00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 229 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.087 The recommended git tool is: git 00:00:00.087 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.131 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.216 > git --version # 'git version 2.39.2' 00:00:00.216 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.486 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.496 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.509 Checking out Revision 974e6abc19174775f0f1ea53bba692f31ffb01a8 (FETCH_HEAD) 00:00:05.509 > git config core.sparsecheckout # timeout=10 00:00:05.519 > git read-tree -mu HEAD # timeout=10 00:00:05.538 > git checkout -f 974e6abc19174775f0f1ea53bba692f31ffb01a8 # timeout=5 00:00:05.556 Commit message: "jenkins/config: change SM0 ip due to lab relocation" 00:00:05.556 > git rev-list --no-walk 974e6abc19174775f0f1ea53bba692f31ffb01a8 # timeout=10 00:00:05.630 [Pipeline] Start of Pipeline 00:00:05.643 [Pipeline] library 00:00:05.644 Loading library shm_lib@master 00:00:05.645 Library shm_lib@master is cached. Copying from home. 00:00:05.660 [Pipeline] node 00:00:05.676 Running on GP14 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.677 [Pipeline] { 00:00:05.687 [Pipeline] catchError 00:00:05.688 [Pipeline] { 00:00:05.700 [Pipeline] wrap 00:00:05.712 [Pipeline] { 00:00:05.720 [Pipeline] stage 00:00:05.721 [Pipeline] { (Prologue) 00:00:05.929 [Pipeline] sh 00:00:06.203 + logger -p user.info -t JENKINS-CI 00:00:06.220 [Pipeline] echo 00:00:06.221 Node: GP14 00:00:06.229 [Pipeline] sh 00:00:06.520 [Pipeline] setCustomBuildProperty 00:00:06.532 [Pipeline] echo 00:00:06.533 Cleanup processes 00:00:06.537 [Pipeline] sh 00:00:06.814 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.814 923074 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.825 [Pipeline] sh 00:00:07.105 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.105 ++ grep -v 'sudo pgrep' 00:00:07.105 ++ awk '{print $1}' 00:00:07.105 + sudo kill -9 00:00:07.105 + true 00:00:07.119 [Pipeline] cleanWs 00:00:07.128 [WS-CLEANUP] Deleting project workspace... 00:00:07.128 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.134 [WS-CLEANUP] done 00:00:07.139 [Pipeline] setCustomBuildProperty 00:00:07.153 [Pipeline] sh 00:00:07.427 + sudo git config --global --replace-all safe.directory '*' 00:00:07.491 [Pipeline] nodesByLabel 00:00:07.493 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.502 [Pipeline] httpRequest 00:00:07.506 HttpMethod: GET 00:00:07.507 URL: http://10.211.164.101/packages/jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:07.513 Sending request to url: http://10.211.164.101/packages/jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:07.520 Response Code: HTTP/1.1 200 OK 00:00:07.521 Success: Status code 200 is in the accepted range: 200,404 00:00:07.522 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:08.290 [Pipeline] sh 00:00:08.575 + tar --no-same-owner -xf jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:08.595 [Pipeline] httpRequest 00:00:08.599 HttpMethod: GET 00:00:08.600 URL: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:08.600 Sending request to url: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:08.619 Response Code: HTTP/1.1 200 OK 00:00:08.620 Success: Status code 200 is in the accepted range: 200,404 00:00:08.620 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:44.320 [Pipeline] sh 00:00:44.602 + tar --no-same-owner -xf spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:48.800 [Pipeline] sh 00:00:49.079 + git -C spdk log --oneline -n5 00:00:49.079 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:00:49.079 5d5e4d333 nvmf/rpc: Fail listener add with different secure channel 00:00:49.079 54944c1d1 event: don't NOTICELOG when no RPC server started 00:00:49.079 460a2e391 lib/init: do not fail if missing RPC's subsystem in JSON config doesn't exist in app 00:00:49.079 5dc808124 init: add spdk_subsystem_exists() 00:00:49.092 [Pipeline] sh 00:00:49.372 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/89/22689/2 00:00:50.307 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:50.307 * branch refs/changes/89/22689/2 -> FETCH_HEAD 00:00:50.317 [Pipeline] sh 00:00:50.595 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:51.968 Previous HEAD position was afe4186365 pmdinfogen: avoid empty string in ELFSymbol() 00:00:51.968 HEAD is now at d5497a26cb isal: compile compress_isal PMD without system-wide libisal 00:00:51.977 [Pipeline] } 00:00:51.992 [Pipeline] // stage 00:00:51.998 [Pipeline] stage 00:00:52.000 [Pipeline] { (Prepare) 00:00:52.012 [Pipeline] writeFile 00:00:52.024 [Pipeline] sh 00:00:52.304 + logger -p user.info -t JENKINS-CI 00:00:52.318 [Pipeline] sh 00:00:52.597 + logger -p user.info -t JENKINS-CI 00:00:52.609 [Pipeline] sh 00:00:52.895 + cat autorun-spdk.conf 00:00:52.896 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.896 SPDK_TEST_NVMF=1 00:00:52.896 SPDK_TEST_NVME_CLI=1 00:00:52.896 SPDK_TEST_NVMF_NICS=mlx5 00:00:52.896 SPDK_RUN_UBSAN=1 00:00:52.896 NET_TYPE=phy 00:00:52.903 RUN_NIGHTLY= 00:00:52.907 [Pipeline] readFile 00:00:52.928 [Pipeline] withEnv 00:00:52.930 [Pipeline] { 00:00:52.943 [Pipeline] sh 00:00:53.238 + set -ex 00:00:53.238 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:53.238 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:53.238 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.238 ++ SPDK_TEST_NVMF=1 
00:00:53.238 ++ SPDK_TEST_NVME_CLI=1 00:00:53.238 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:53.238 ++ SPDK_RUN_UBSAN=1 00:00:53.238 ++ NET_TYPE=phy 00:00:53.238 ++ RUN_NIGHTLY= 00:00:53.238 + case $SPDK_TEST_NVMF_NICS in 00:00:53.238 + DRIVERS=mlx5_ib 00:00:53.238 + [[ -n mlx5_ib ]] 00:00:53.238 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:53.238 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:01.343 rmmod: ERROR: Module irdma is not currently loaded 00:01:01.343 rmmod: ERROR: Module i40iw is not currently loaded 00:01:01.343 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:01.343 + true 00:01:01.343 + for D in $DRIVERS 00:01:01.343 + sudo modprobe mlx5_ib 00:01:01.343 + exit 0 00:01:01.351 [Pipeline] } 00:01:01.367 [Pipeline] // withEnv 00:01:01.371 [Pipeline] } 00:01:01.381 [Pipeline] // stage 00:01:01.389 [Pipeline] catchError 00:01:01.390 [Pipeline] { 00:01:01.402 [Pipeline] timeout 00:01:01.402 Timeout set to expire in 40 min 00:01:01.403 [Pipeline] { 00:01:01.418 [Pipeline] stage 00:01:01.420 [Pipeline] { (Tests) 00:01:01.434 [Pipeline] sh 00:01:01.712 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:01.712 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:01.712 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:01.712 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:01.712 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:01.712 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:01.712 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:01.712 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:01.712 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:01.712 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:01.712 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:01.712 + source /etc/os-release 00:01:01.712 ++ NAME='Fedora Linux' 00:01:01.712 ++ VERSION='38 (Cloud Edition)' 00:01:01.712 ++ ID=fedora 00:01:01.712 ++ VERSION_ID=38 00:01:01.712 ++ VERSION_CODENAME= 00:01:01.712 ++ PLATFORM_ID=platform:f38 00:01:01.712 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:01.712 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:01.712 ++ LOGO=fedora-logo-icon 00:01:01.712 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:01.712 ++ HOME_URL=https://fedoraproject.org/ 00:01:01.712 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:01.712 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:01.712 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:01.712 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:01.712 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:01.712 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:01.712 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:01.712 ++ SUPPORT_END=2024-05-14 00:01:01.712 ++ VARIANT='Cloud Edition' 00:01:01.712 ++ VARIANT_ID=cloud 00:01:01.712 + uname -a 00:01:01.712 Linux spdk-gp-14 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:01.712 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:03.085 Hugepages 00:01:03.085 node hugesize free / total 00:01:03.085 node0 1048576kB 0 / 0 00:01:03.085 node0 2048kB 0 / 0 00:01:03.085 node1 1048576kB 0 / 0 00:01:03.085 node1 2048kB 0 / 0 00:01:03.085 00:01:03.085 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:03.085 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:03.085 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:01:03.085 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:03.085 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:03.085 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:03.085 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:03.085 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:03.085 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:03.085 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:03.085 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:03.085 + rm -f /tmp/spdk-ld-path 00:01:03.085 + source autorun-spdk.conf 00:01:03.085 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.085 ++ SPDK_TEST_NVMF=1 00:01:03.085 ++ SPDK_TEST_NVME_CLI=1 00:01:03.085 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:03.085 ++ SPDK_RUN_UBSAN=1 00:01:03.085 ++ NET_TYPE=phy 00:01:03.085 ++ RUN_NIGHTLY= 00:01:03.085 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:03.085 + [[ -n '' ]] 00:01:03.085 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:03.085 + for M in /var/spdk/build-*-manifest.txt 00:01:03.085 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:03.085 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:03.085 + for M in /var/spdk/build-*-manifest.txt 00:01:03.085 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:03.085 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:03.085 ++ uname 00:01:03.085 + [[ Linux == \L\i\n\u\x ]] 00:01:03.085 + sudo dmesg -T 00:01:03.085 + sudo dmesg --clear 00:01:03.085 + dmesg_pid=924016 00:01:03.085 + [[ Fedora Linux == FreeBSD ]] 00:01:03.085 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.085 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.085 + sudo dmesg -Tw 00:01:03.085 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:03.085 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:03.085 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:03.085 + [[ -x /usr/src/fio-static/fio ]] 00:01:03.085 + export FIO_BIN=/usr/src/fio-static/fio 00:01:03.085 + FIO_BIN=/usr/src/fio-static/fio 00:01:03.085 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:03.085 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:03.085 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:03.085 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.085 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.085 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:03.085 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.085 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.085 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:03.085 Test configuration: 00:01:03.085 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.085 SPDK_TEST_NVMF=1 00:01:03.085 SPDK_TEST_NVME_CLI=1 00:01:03.085 SPDK_TEST_NVMF_NICS=mlx5 00:01:03.085 SPDK_RUN_UBSAN=1 00:01:03.085 NET_TYPE=phy 00:01:03.085 RUN_NIGHTLY= 13:30:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:03.085 13:30:05 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:03.085 13:30:05 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:03.085 13:30:05 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:03.085 13:30:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.085 13:30:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.085 13:30:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.085 13:30:05 -- paths/export.sh@5 -- $ export PATH 00:01:03.085 13:30:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.085 13:30:05 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:03.085 13:30:05 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:03.085 13:30:05 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713439805.XXXXXX 00:01:03.085 13:30:05 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713439805.hTttCd 00:01:03.085 13:30:05 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:03.085 13:30:05 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:03.085 13:30:05 -- common/autobuild_common.sh@444 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:03.085 13:30:05 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:03.085 13:30:05 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:03.085 13:30:05 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:03.085 13:30:05 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:03.085 13:30:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.085 13:30:05 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:03.085 13:30:05 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:03.085 13:30:05 -- pm/common@17 -- $ local monitor 00:01:03.085 13:30:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.085 13:30:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=924050 00:01:03.085 13:30:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.085 13:30:05 -- pm/common@21 -- $ date +%s 00:01:03.085 13:30:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=924052 00:01:03.085 13:30:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.085 13:30:05 -- pm/common@21 -- $ date +%s 00:01:03.085 13:30:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=924055 00:01:03.085 13:30:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.085 13:30:05 -- pm/common@21 -- $ date +%s 00:01:03.085 13:30:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=924058 00:01:03.085 13:30:05 -- pm/common@26 -- $ sleep 1 00:01:03.085 13:30:05 -- pm/common@21 -- $ date +%s 00:01:03.085 13:30:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439805 00:01:03.085 13:30:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439805 00:01:03.085 13:30:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439805 00:01:03.086 13:30:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439805 00:01:03.086 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439805_collect-vmstat.pm.log 00:01:03.086 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439805_collect-bmc-pm.bmc.pm.log 00:01:03.086 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439805_collect-cpu-load.pm.log 00:01:03.086 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439805_collect-cpu-temp.pm.log 00:01:04.019 13:30:06 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:04.019 13:30:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:04.019 13:30:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:04.019 13:30:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:04.019 13:30:06 -- spdk/autobuild.sh@16 -- $ date -u 00:01:04.019 Thu Apr 18 11:30:06 AM UTC 2024 00:01:04.019 13:30:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:04.276 v24.05-pre-407-g65b4e17c6 00:01:04.276 13:30:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:04.276 13:30:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:04.276 13:30:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:04.276 13:30:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:04.276 13:30:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:04.276 13:30:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.276 ************************************ 00:01:04.276 START TEST ubsan 00:01:04.276 ************************************ 00:01:04.276 13:30:06 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:04.276 using ubsan 00:01:04.276 00:01:04.276 real 0m0.000s 00:01:04.276 user 0m0.000s 00:01:04.276 sys 0m0.000s 00:01:04.276 13:30:06 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:04.276 13:30:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.276 ************************************ 00:01:04.276 END TEST ubsan 00:01:04.276 ************************************ 00:01:04.276 13:30:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:04.276 13:30:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:04.276 13:30:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:04.276 13:30:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:04.276 13:30:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:04.276 13:30:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:04.276 13:30:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:04.276 13:30:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:04.276 13:30:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:04.276 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:04.276 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:04.842 Using 'verbs' RDMA provider 00:01:17.622 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:29.823 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:29.823 Creating mk/config.mk...done. 00:01:29.823 Creating mk/cc.flags.mk...done. 00:01:29.823 Type 'make' to build. 
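A minimal sketch of rerunning the same configure and build step outside Jenkins, assuming an SPDK checkout with its bundled DPDK in ./spdk and fio sources under /usr/src/fio as on this node; the flags mirror the config_params recorded above and the -j48 make invocation that follows (illustrative only, not captured log output):

    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-shared
    make -j48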
00:01:29.823 13:30:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:29.823 13:30:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:29.823 13:30:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:29.823 13:30:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.823 ************************************ 00:01:29.823 START TEST make 00:01:29.823 ************************************ 00:01:29.823 13:30:32 -- common/autotest_common.sh@1111 -- $ make -j48 00:01:29.823 make[1]: Nothing to be done for 'all'. 00:01:39.820 The Meson build system 00:01:39.820 Version: 1.3.1 00:01:39.820 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:39.820 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:39.820 Build type: native build 00:01:39.820 Program cat found: YES (/usr/bin/cat) 00:01:39.820 Project name: DPDK 00:01:39.820 Project version: 24.03.0 00:01:39.820 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.820 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.820 Host machine cpu family: x86_64 00:01:39.820 Host machine cpu: x86_64 00:01:39.820 Message: ## Building in Developer Mode ## 00:01:39.820 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.820 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:39.820 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:39.820 Program python3 found: YES (/usr/bin/python3) 00:01:39.820 Program cat found: YES (/usr/bin/cat) 00:01:39.820 Compiler for C supports arguments -march=native: YES 00:01:39.820 Checking for size of "void *" : 8 00:01:39.820 Checking for size of "void *" : 8 (cached) 00:01:39.820 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:39.820 Library m found: YES 00:01:39.820 Library numa found: YES 00:01:39.820 Has header "numaif.h" : YES 00:01:39.820 Library fdt found: NO 00:01:39.820 Library execinfo found: NO 00:01:39.820 Has header "execinfo.h" : YES 00:01:39.820 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.820 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.820 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.820 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.820 Run-time dependency openssl found: YES 3.0.9 00:01:39.820 Run-time dependency libpcap found: YES 1.10.4 00:01:39.820 Has header "pcap.h" with dependency libpcap: YES 00:01:39.820 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.820 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.820 Compiler for C supports arguments -Wformat: YES 00:01:39.820 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.820 Compiler for C supports arguments -Wformat-security: NO 00:01:39.820 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.820 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.820 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.820 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.820 Compiler for C supports arguments -Wpointer-arith: YES 00:01:39.820 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.820 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.820 Compiler for C supports arguments -Wundef: YES 00:01:39.820 Compiler for C supports 
arguments -Wwrite-strings: YES 00:01:39.820 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.820 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:39.820 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.820 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.820 Program objdump found: YES (/usr/bin/objdump) 00:01:39.820 Compiler for C supports arguments -mavx512f: YES 00:01:39.820 Checking if "AVX512 checking" compiles: YES 00:01:39.820 Fetching value of define "__SSE4_2__" : 1 00:01:39.820 Fetching value of define "__AES__" : 1 00:01:39.821 Fetching value of define "__AVX__" : 1 00:01:39.821 Fetching value of define "__AVX2__" : (undefined) 00:01:39.821 Fetching value of define "__AVX512BW__" : (undefined) 00:01:39.821 Fetching value of define "__AVX512CD__" : (undefined) 00:01:39.821 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:39.821 Fetching value of define "__AVX512F__" : (undefined) 00:01:39.821 Fetching value of define "__AVX512VL__" : (undefined) 00:01:39.821 Fetching value of define "__PCLMUL__" : 1 00:01:39.821 Fetching value of define "__RDRND__" : 1 00:01:39.821 Fetching value of define "__RDSEED__" : (undefined) 00:01:39.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:39.821 Fetching value of define "__znver1__" : (undefined) 00:01:39.821 Fetching value of define "__znver2__" : (undefined) 00:01:39.821 Fetching value of define "__znver3__" : (undefined) 00:01:39.821 Fetching value of define "__znver4__" : (undefined) 00:01:39.821 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.821 Message: lib/log: Defining dependency "log" 00:01:39.821 Message: lib/kvargs: Defining dependency "kvargs" 00:01:39.821 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.821 Checking for function "getentropy" : NO 00:01:39.821 Message: lib/eal: Defining dependency "eal" 00:01:39.821 Message: lib/ring: Defining dependency "ring" 00:01:39.821 Message: lib/rcu: Defining dependency "rcu" 00:01:39.821 Message: lib/mempool: Defining dependency "mempool" 00:01:39.821 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.821 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.821 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:39.821 Compiler for C supports arguments -mpclmul: YES 00:01:39.821 Compiler for C supports arguments -maes: YES 00:01:39.821 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.821 Compiler for C supports arguments -mavx512bw: YES 00:01:39.821 Compiler for C supports arguments -mavx512dq: YES 00:01:39.821 Compiler for C supports arguments -mavx512vl: YES 00:01:39.821 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.821 Compiler for C supports arguments -mavx2: YES 00:01:39.821 Compiler for C supports arguments -mavx: YES 00:01:39.821 Message: lib/net: Defining dependency "net" 00:01:39.821 Message: lib/meter: Defining dependency "meter" 00:01:39.821 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.821 Message: lib/pci: Defining dependency "pci" 00:01:39.821 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.821 Message: lib/hash: Defining dependency "hash" 00:01:39.821 Message: lib/timer: Defining dependency "timer" 00:01:39.821 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.821 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.821 Message: lib/dmadev: Defining dependency "dmadev" 00:01:39.821 Compiler for 
C supports arguments -Wno-cast-qual: YES 00:01:39.821 Message: lib/power: Defining dependency "power" 00:01:39.821 Message: lib/reorder: Defining dependency "reorder" 00:01:39.821 Message: lib/security: Defining dependency "security" 00:01:39.821 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:01:39.821 Message: lib/stack: Defining dependency "stack" 00:01:39.821 Has header "linux/userfaultfd.h" : YES 00:01:39.821 Has header "linux/vduse.h" : YES 00:01:39.821 Message: lib/vhost: Defining dependency "vhost" 00:01:39.821 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:39.821 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.821 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.821 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:39.821 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:39.821 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:39.821 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:39.821 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:39.821 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:39.821 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:39.821 Program doxygen found: YES (/usr/bin/doxygen) 00:01:39.821 Configuring doxy-api-html.conf using configuration 00:01:39.821 Configuring doxy-api-man.conf using configuration 00:01:39.821 Program mandb found: YES (/usr/bin/mandb) 00:01:39.821 Program sphinx-build found: NO 00:01:39.821 Configuring rte_build_config.h using configuration 00:01:39.821 Message: 00:01:39.821 ================= 00:01:39.821 Applications Enabled 00:01:39.821 ================= 00:01:39.821 00:01:39.821 apps: 00:01:39.821 00:01:39.821 00:01:39.821 Message: 00:01:39.821 ================= 00:01:39.821 Libraries Enabled 00:01:39.821 ================= 00:01:39.821 00:01:39.821 libs: 00:01:39.821 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:39.821 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:39.821 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:01:39.821 00:01:39.821 Message: 00:01:39.821 =============== 00:01:39.821 Drivers Enabled 00:01:39.821 =============== 00:01:39.821 00:01:39.821 common: 00:01:39.821 00:01:39.821 bus: 00:01:39.821 pci, vdev, 00:01:39.821 mempool: 00:01:39.821 ring, 00:01:39.821 dma: 00:01:39.821 00:01:39.821 net: 00:01:39.821 00:01:39.821 crypto: 00:01:39.821 00:01:39.821 compress: 00:01:39.821 00:01:39.821 vdpa: 00:01:39.821 00:01:39.821 00:01:39.821 Message: 00:01:39.821 ================= 00:01:39.821 Content Skipped 00:01:39.821 ================= 00:01:39.821 00:01:39.821 apps: 00:01:39.821 dumpcap: explicitly disabled via build config 00:01:39.821 graph: explicitly disabled via build config 00:01:39.821 pdump: explicitly disabled via build config 00:01:39.821 proc-info: explicitly disabled via build config 00:01:39.821 test-acl: explicitly disabled via build config 00:01:39.821 test-bbdev: explicitly disabled via build config 00:01:39.821 test-cmdline: explicitly disabled via build config 00:01:39.821 test-compress-perf: explicitly disabled via build config 00:01:39.821 test-crypto-perf: explicitly disabled via build config 00:01:39.821 test-dma-perf: explicitly disabled via build config 00:01:39.821 test-eventdev: explicitly disabled via build config 00:01:39.821 test-fib: explicitly disabled via build 
config 00:01:39.821 test-flow-perf: explicitly disabled via build config 00:01:39.821 test-gpudev: explicitly disabled via build config 00:01:39.821 test-mldev: explicitly disabled via build config 00:01:39.821 test-pipeline: explicitly disabled via build config 00:01:39.821 test-pmd: explicitly disabled via build config 00:01:39.821 test-regex: explicitly disabled via build config 00:01:39.821 test-sad: explicitly disabled via build config 00:01:39.821 test-security-perf: explicitly disabled via build config 00:01:39.821 00:01:39.821 libs: 00:01:39.821 argparse: explicitly disabled via build config 00:01:39.821 metrics: explicitly disabled via build config 00:01:39.821 acl: explicitly disabled via build config 00:01:39.821 bbdev: explicitly disabled via build config 00:01:39.821 bitratestats: explicitly disabled via build config 00:01:39.821 bpf: explicitly disabled via build config 00:01:39.821 cfgfile: explicitly disabled via build config 00:01:39.821 distributor: explicitly disabled via build config 00:01:39.821 efd: explicitly disabled via build config 00:01:39.821 eventdev: explicitly disabled via build config 00:01:39.821 dispatcher: explicitly disabled via build config 00:01:39.821 gpudev: explicitly disabled via build config 00:01:39.821 gro: explicitly disabled via build config 00:01:39.821 gso: explicitly disabled via build config 00:01:39.821 ip_frag: explicitly disabled via build config 00:01:39.821 jobstats: explicitly disabled via build config 00:01:39.821 latencystats: explicitly disabled via build config 00:01:39.821 lpm: explicitly disabled via build config 00:01:39.821 member: explicitly disabled via build config 00:01:39.821 pcapng: explicitly disabled via build config 00:01:39.821 rawdev: explicitly disabled via build config 00:01:39.821 regexdev: explicitly disabled via build config 00:01:39.821 mldev: explicitly disabled via build config 00:01:39.821 rib: explicitly disabled via build config 00:01:39.821 sched: explicitly disabled via build config 00:01:39.821 ipsec: explicitly disabled via build config 00:01:39.821 pdcp: explicitly disabled via build config 00:01:39.821 fib: explicitly disabled via build config 00:01:39.821 port: explicitly disabled via build config 00:01:39.821 pdump: explicitly disabled via build config 00:01:39.821 table: explicitly disabled via build config 00:01:39.821 pipeline: explicitly disabled via build config 00:01:39.821 graph: explicitly disabled via build config 00:01:39.821 node: explicitly disabled via build config 00:01:39.821 00:01:39.821 drivers: 00:01:39.821 common/cpt: not in enabled drivers build config 00:01:39.821 common/dpaax: not in enabled drivers build config 00:01:39.821 common/iavf: not in enabled drivers build config 00:01:39.821 common/idpf: not in enabled drivers build config 00:01:39.821 common/ionic: not in enabled drivers build config 00:01:39.821 common/mvep: not in enabled drivers build config 00:01:39.821 common/octeontx: not in enabled drivers build config 00:01:39.821 bus/auxiliary: not in enabled drivers build config 00:01:39.821 bus/cdx: not in enabled drivers build config 00:01:39.821 bus/dpaa: not in enabled drivers build config 00:01:39.821 bus/fslmc: not in enabled drivers build config 00:01:39.821 bus/ifpga: not in enabled drivers build config 00:01:39.821 bus/platform: not in enabled drivers build config 00:01:39.821 bus/uacce: not in enabled drivers build config 00:01:39.821 bus/vmbus: not in enabled drivers build config 00:01:39.821 common/cnxk: not in enabled drivers build config 00:01:39.821 
common/mlx5: not in enabled drivers build config 00:01:39.821 common/nfp: not in enabled drivers build config 00:01:39.821 common/nitrox: not in enabled drivers build config 00:01:39.821 common/qat: not in enabled drivers build config 00:01:39.821 common/sfc_efx: not in enabled drivers build config 00:01:39.821 mempool/bucket: not in enabled drivers build config 00:01:39.821 mempool/cnxk: not in enabled drivers build config 00:01:39.821 mempool/dpaa: not in enabled drivers build config 00:01:39.821 mempool/dpaa2: not in enabled drivers build config 00:01:39.821 mempool/octeontx: not in enabled drivers build config 00:01:39.821 mempool/stack: not in enabled drivers build config 00:01:39.821 dma/cnxk: not in enabled drivers build config 00:01:39.821 dma/dpaa: not in enabled drivers build config 00:01:39.821 dma/dpaa2: not in enabled drivers build config 00:01:39.821 dma/hisilicon: not in enabled drivers build config 00:01:39.822 dma/idxd: not in enabled drivers build config 00:01:39.822 dma/ioat: not in enabled drivers build config 00:01:39.822 dma/skeleton: not in enabled drivers build config 00:01:39.822 net/af_packet: not in enabled drivers build config 00:01:39.822 net/af_xdp: not in enabled drivers build config 00:01:39.822 net/ark: not in enabled drivers build config 00:01:39.822 net/atlantic: not in enabled drivers build config 00:01:39.822 net/avp: not in enabled drivers build config 00:01:39.822 net/axgbe: not in enabled drivers build config 00:01:39.822 net/bnx2x: not in enabled drivers build config 00:01:39.822 net/bnxt: not in enabled drivers build config 00:01:39.822 net/bonding: not in enabled drivers build config 00:01:39.822 net/cnxk: not in enabled drivers build config 00:01:39.822 net/cpfl: not in enabled drivers build config 00:01:39.822 net/cxgbe: not in enabled drivers build config 00:01:39.822 net/dpaa: not in enabled drivers build config 00:01:39.822 net/dpaa2: not in enabled drivers build config 00:01:39.822 net/e1000: not in enabled drivers build config 00:01:39.822 net/ena: not in enabled drivers build config 00:01:39.822 net/enetc: not in enabled drivers build config 00:01:39.822 net/enetfec: not in enabled drivers build config 00:01:39.822 net/enic: not in enabled drivers build config 00:01:39.822 net/failsafe: not in enabled drivers build config 00:01:39.822 net/fm10k: not in enabled drivers build config 00:01:39.822 net/gve: not in enabled drivers build config 00:01:39.822 net/hinic: not in enabled drivers build config 00:01:39.822 net/hns3: not in enabled drivers build config 00:01:39.822 net/i40e: not in enabled drivers build config 00:01:39.822 net/iavf: not in enabled drivers build config 00:01:39.822 net/ice: not in enabled drivers build config 00:01:39.822 net/idpf: not in enabled drivers build config 00:01:39.822 net/igc: not in enabled drivers build config 00:01:39.822 net/ionic: not in enabled drivers build config 00:01:39.822 net/ipn3ke: not in enabled drivers build config 00:01:39.822 net/ixgbe: not in enabled drivers build config 00:01:39.822 net/mana: not in enabled drivers build config 00:01:39.822 net/memif: not in enabled drivers build config 00:01:39.822 net/mlx4: not in enabled drivers build config 00:01:39.822 net/mlx5: not in enabled drivers build config 00:01:39.822 net/mvneta: not in enabled drivers build config 00:01:39.822 net/mvpp2: not in enabled drivers build config 00:01:39.822 net/netvsc: not in enabled drivers build config 00:01:39.822 net/nfb: not in enabled drivers build config 00:01:39.822 net/nfp: not in enabled drivers build 
config 00:01:39.822 net/ngbe: not in enabled drivers build config 00:01:39.822 net/null: not in enabled drivers build config 00:01:39.822 net/octeontx: not in enabled drivers build config 00:01:39.822 net/octeon_ep: not in enabled drivers build config 00:01:39.822 net/pcap: not in enabled drivers build config 00:01:39.822 net/pfe: not in enabled drivers build config 00:01:39.822 net/qede: not in enabled drivers build config 00:01:39.822 net/ring: not in enabled drivers build config 00:01:39.822 net/sfc: not in enabled drivers build config 00:01:39.822 net/softnic: not in enabled drivers build config 00:01:39.822 net/tap: not in enabled drivers build config 00:01:39.822 net/thunderx: not in enabled drivers build config 00:01:39.822 net/txgbe: not in enabled drivers build config 00:01:39.822 net/vdev_netvsc: not in enabled drivers build config 00:01:39.822 net/vhost: not in enabled drivers build config 00:01:39.822 net/virtio: not in enabled drivers build config 00:01:39.822 net/vmxnet3: not in enabled drivers build config 00:01:39.822 raw/*: missing internal dependency, "rawdev" 00:01:39.822 crypto/armv8: not in enabled drivers build config 00:01:39.822 crypto/bcmfs: not in enabled drivers build config 00:01:39.822 crypto/caam_jr: not in enabled drivers build config 00:01:39.822 crypto/ccp: not in enabled drivers build config 00:01:39.822 crypto/cnxk: not in enabled drivers build config 00:01:39.822 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.822 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.822 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.822 crypto/mlx5: not in enabled drivers build config 00:01:39.822 crypto/mvsam: not in enabled drivers build config 00:01:39.822 crypto/nitrox: not in enabled drivers build config 00:01:39.822 crypto/null: not in enabled drivers build config 00:01:39.822 crypto/octeontx: not in enabled drivers build config 00:01:39.822 crypto/openssl: not in enabled drivers build config 00:01:39.822 crypto/scheduler: not in enabled drivers build config 00:01:39.822 crypto/uadk: not in enabled drivers build config 00:01:39.822 crypto/virtio: not in enabled drivers build config 00:01:39.822 compress/isal: not in enabled drivers build config 00:01:39.822 compress/mlx5: not in enabled drivers build config 00:01:39.822 compress/nitrox: not in enabled drivers build config 00:01:39.822 compress/octeontx: not in enabled drivers build config 00:01:39.822 compress/zlib: not in enabled drivers build config 00:01:39.822 regex/*: missing internal dependency, "regexdev" 00:01:39.822 ml/*: missing internal dependency, "mldev" 00:01:39.822 vdpa/ifc: not in enabled drivers build config 00:01:39.822 vdpa/mlx5: not in enabled drivers build config 00:01:39.822 vdpa/nfp: not in enabled drivers build config 00:01:39.822 vdpa/sfc: not in enabled drivers build config 00:01:39.822 event/*: missing internal dependency, "eventdev" 00:01:39.822 baseband/*: missing internal dependency, "bbdev" 00:01:39.822 gpu/*: missing internal dependency, "gpudev" 00:01:39.822 00:01:39.822 00:01:39.822 Build targets in project: 88 00:01:39.822 00:01:39.822 DPDK 24.03.0 00:01:39.822 00:01:39.822 User defined options 00:01:39.822 buildtype : debug 00:01:39.822 default_library : shared 00:01:39.822 libdir : lib 00:01:39.822 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:39.822 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:39.822 c_link_args : 00:01:39.822 cpu_instruction_set: native 
00:01:39.822 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:39.822 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib,argparse 00:01:39.822 enable_docs : false 00:01:39.822 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:39.822 enable_kmods : false 00:01:39.822 tests : false 00:01:39.822 00:01:39.822 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.091 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:40.091 [1/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:40.091 [2/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:40.091 [3/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:40.091 [4/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:40.091 [5/274] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:40.091 [6/274] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:40.091 [7/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:40.091 [8/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:40.091 [9/274] Linking static target lib/librte_kvargs.a 00:01:40.091 [10/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:40.091 [11/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:40.091 [12/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:40.091 [13/274] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:40.091 [14/274] Linking static target lib/librte_log.a 00:01:40.091 [15/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:40.349 [16/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:40.924 [17/274] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.924 [18/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.924 [19/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.924 [20/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.924 [21/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.924 [22/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:40.924 [23/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.924 [24/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:40.924 [25/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.924 [26/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.924 [27/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.924 [28/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.924 [29/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.924 [30/274] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:41.188 [31/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:41.188 [32/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:41.188 [33/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:41.188 [34/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:41.188 [35/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:41.188 [36/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:41.188 [37/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:41.188 [38/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:41.188 [39/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:41.188 [40/274] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:41.188 [41/274] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:41.188 [42/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:41.188 [43/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:41.188 [44/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:41.188 [45/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:41.188 [46/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:41.188 [47/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:41.188 [48/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:41.188 [49/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:41.188 [50/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:41.188 [51/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:41.188 [52/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:41.188 [53/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:41.188 [54/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:41.188 [55/274] Linking static target lib/librte_telemetry.a 00:01:41.188 [56/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:41.188 [57/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:41.188 [58/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:41.188 [59/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:41.188 [60/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:41.188 [61/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:41.451 [62/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:41.452 [63/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:41.452 [64/274] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.452 [65/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:41.452 [66/274] Linking target lib/librte_log.so.24.1 00:01:41.452 [67/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:41.712 [68/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:41.712 [69/274] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:41.712 [70/274] Linking static target lib/librte_pci.a 00:01:41.712 
[71/274] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:41.974 [72/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:41.974 [73/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:41.974 [74/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:41.974 [75/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:41.974 [76/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:41.974 [77/274] Linking target lib/librte_kvargs.so.24.1 00:01:41.974 [78/274] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:41.974 [79/274] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:41.974 [80/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:41.974 [81/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:41.974 [82/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:41.974 [83/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:41.974 [84/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:41.974 [85/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:41.974 [86/274] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:41.974 [87/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:41.974 [88/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:41.974 [89/274] Linking static target lib/librte_meter.a 00:01:42.238 [90/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:42.238 [91/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:42.238 [92/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:42.238 [93/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:42.238 [94/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:42.238 [95/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:42.238 [96/274] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:42.238 [97/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:42.238 [98/274] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:42.238 [99/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:42.238 [100/274] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:42.238 [101/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:42.238 [102/274] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:42.238 [103/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:42.238 [104/274] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:42.238 [105/274] Linking static target lib/librte_ring.a 00:01:42.238 [106/274] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.238 [107/274] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:42.238 [108/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:42.238 [109/274] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.238 [110/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:42.238 [111/274] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:42.238 [112/274] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.238 [113/274] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:42.238 [114/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:42.499 [115/274] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:42.499 [116/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:42.499 [117/274] Linking static target lib/librte_eal.a 00:01:42.499 [118/274] Linking target lib/librte_telemetry.so.24.1 00:01:42.499 [119/274] Linking static target lib/librte_rcu.a 00:01:42.499 [120/274] Linking static target lib/librte_mempool.a 00:01:42.499 [121/274] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:42.499 [122/274] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:42.499 [123/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:42.499 [124/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:42.499 [125/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:42.499 [126/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:42.499 [127/274] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:42.499 [128/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:42.762 [129/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:42.762 [130/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:42.762 [131/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:42.762 [132/274] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.762 [133/274] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:42.762 [134/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:42.762 [135/274] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:42.762 [136/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:42.762 [137/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.762 [138/274] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:43.027 [139/274] Linking static target lib/librte_net.a 00:01:43.027 [140/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:43.027 [141/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:43.027 [142/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:43.027 [143/274] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.027 [144/274] Linking static target lib/librte_cmdline.a 00:01:43.027 [145/274] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:43.027 [146/274] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.027 [147/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:43.027 [148/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:43.286 [149/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:43.286 [150/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:43.286 [151/274] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:43.286 [152/274] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:43.286 [153/274] Linking static target lib/librte_timer.a 00:01:43.286 [154/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:43.286 [155/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:43.286 [156/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.286 [157/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:43.286 [158/274] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:43.286 [159/274] Linking static target lib/librte_dmadev.a 00:01:43.286 [160/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:43.286 [161/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:43.286 [162/274] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:43.545 [163/274] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:43.546 [164/274] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.546 [165/274] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:43.546 [166/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:43.546 [167/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:43.546 [168/274] Linking static target lib/librte_stack.a 00:01:43.546 [169/274] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:43.546 [170/274] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:43.546 [171/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:43.546 [172/274] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.546 [173/274] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:43.546 [174/274] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:43.546 [175/274] Linking static target lib/librte_power.a 00:01:43.804 [176/274] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:43.804 [177/274] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.804 [178/274] Linking static target lib/librte_hash.a 00:01:43.804 [179/274] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:43.804 [180/274] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.804 [181/274] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:43.804 [182/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:43.804 [183/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:43.804 [184/274] Linking static target lib/librte_compressdev.a 00:01:43.804 [185/274] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:43.804 [186/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:43.804 [187/274] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.063 [188/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.063 [189/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.063 [190/274] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:44.063 [191/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 
00:01:44.063 [192/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.063 [193/274] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.063 [194/274] Linking static target lib/librte_mbuf.a 00:01:44.063 [195/274] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.063 [196/274] Linking static target lib/librte_security.a 00:01:44.063 [197/274] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.064 [198/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.064 [199/274] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.064 [200/274] Linking static target lib/librte_reorder.a 00:01:44.322 [201/274] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.322 [202/274] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.322 [203/274] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.322 [204/274] Linking static target drivers/librte_bus_vdev.a 00:01:44.322 [205/274] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.322 [206/274] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.322 [207/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.322 [208/274] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.322 [209/274] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.322 [210/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.322 [211/274] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.322 [212/274] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.322 [213/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.322 [214/274] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.322 [215/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.322 [216/274] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.580 [217/274] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.580 [218/274] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.580 [219/274] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.580 [220/274] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.580 [221/274] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.580 [222/274] Linking static target drivers/librte_mempool_ring.a 00:01:44.580 [223/274] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:44.580 [224/274] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.580 [225/274] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.580 [226/274] Linking static target drivers/librte_bus_pci.a 00:01:44.580 [227/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:44.580 [228/274] Linking static target lib/librte_cryptodev.a 00:01:44.838 [229/274] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.838 [230/274] Linking static target lib/librte_ethdev.a 00:01:45.096 [231/274] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.064 [232/274] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.437 [233/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.336 [234/274] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.595 [235/274] Linking target lib/librte_eal.so.24.1 00:01:49.595 [236/274] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:49.595 [237/274] Linking target lib/librte_ring.so.24.1 00:01:49.595 [238/274] Linking target lib/librte_meter.so.24.1 00:01:49.595 [239/274] Linking target lib/librte_timer.so.24.1 00:01:49.595 [240/274] Linking target lib/librte_pci.so.24.1 00:01:49.595 [241/274] Linking target lib/librte_dmadev.so.24.1 00:01:49.595 [242/274] Linking target lib/librte_stack.so.24.1 00:01:49.595 [243/274] Linking target drivers/librte_bus_vdev.so.24.1 00:01:49.853 [244/274] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.853 [245/274] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:49.853 [246/274] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:49.853 [247/274] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:49.853 [248/274] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:49.853 [249/274] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:49.853 [250/274] Linking target drivers/librte_bus_pci.so.24.1 00:01:49.853 [251/274] Linking target lib/librte_mempool.so.24.1 00:01:49.853 [252/274] Linking target lib/librte_rcu.so.24.1 00:01:50.111 [253/274] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:50.111 [254/274] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:50.111 [255/274] Linking target lib/librte_mbuf.so.24.1 00:01:50.111 [256/274] Linking target drivers/librte_mempool_ring.so.24.1 00:01:50.369 [257/274] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:50.369 [258/274] Linking target lib/librte_reorder.so.24.1 00:01:50.369 [259/274] Linking target lib/librte_compressdev.so.24.1 00:01:50.369 [260/274] Linking target lib/librte_cryptodev.so.24.1 00:01:50.369 [261/274] Linking target lib/librte_net.so.24.1 00:01:50.369 [262/274] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:50.369 [263/274] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:50.369 [264/274] Linking target lib/librte_hash.so.24.1 00:01:50.369 [265/274] Linking target lib/librte_cmdline.so.24.1 00:01:50.369 [266/274] Linking target lib/librte_security.so.24.1 00:01:50.629 [267/274] Linking target lib/librte_ethdev.so.24.1 00:01:50.629 [268/274] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:50.629 [269/274] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:50.629 [270/274] Linking target lib/librte_power.so.24.1 00:01:54.817 [271/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.818 [272/274] Linking static target lib/librte_vhost.a 00:01:55.754 
[273/274] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.754 [274/274] Linking target lib/librte_vhost.so.24.1 00:01:55.754 INFO: autodetecting backend as ninja 00:01:55.754 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:57.129 CC lib/ut_mock/mock.o 00:01:57.129 CC lib/ut/ut.o 00:01:57.129 CC lib/log/log.o 00:01:57.129 CC lib/log/log_flags.o 00:01:57.129 CC lib/log/log_deprecated.o 00:01:57.129 LIB libspdk_log.a 00:01:57.129 LIB libspdk_ut_mock.a 00:01:57.129 SO libspdk_ut_mock.so.6.0 00:01:57.387 SO libspdk_log.so.7.0 00:01:57.387 LIB libspdk_ut.a 00:01:57.387 SO libspdk_ut.so.2.0 00:01:57.387 SYMLINK libspdk_ut_mock.so 00:01:57.387 SYMLINK libspdk_log.so 00:01:57.387 SYMLINK libspdk_ut.so 00:01:57.387 CXX lib/trace_parser/trace.o 00:01:57.387 CC lib/dma/dma.o 00:01:57.387 CC lib/ioat/ioat.o 00:01:57.387 CC lib/util/base64.o 00:01:57.387 CC lib/util/bit_array.o 00:01:57.387 CC lib/util/cpuset.o 00:01:57.387 CC lib/util/crc16.o 00:01:57.387 CC lib/util/crc32.o 00:01:57.387 CC lib/util/crc32c.o 00:01:57.387 CC lib/util/crc32_ieee.o 00:01:57.387 CC lib/util/crc64.o 00:01:57.387 CC lib/util/dif.o 00:01:57.387 CC lib/util/fd.o 00:01:57.387 CC lib/util/file.o 00:01:57.387 CC lib/util/hexlify.o 00:01:57.387 CC lib/util/iov.o 00:01:57.387 CC lib/util/math.o 00:01:57.387 CC lib/util/pipe.o 00:01:57.387 CC lib/util/strerror_tls.o 00:01:57.387 CC lib/util/string.o 00:01:57.387 CC lib/util/uuid.o 00:01:57.387 CC lib/util/fd_group.o 00:01:57.387 CC lib/util/xor.o 00:01:57.387 CC lib/util/zipf.o 00:01:57.644 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.644 CC lib/vfio_user/host/vfio_user.o 00:01:57.644 LIB libspdk_dma.a 00:01:57.644 SO libspdk_dma.so.4.0 00:01:57.903 SYMLINK libspdk_dma.so 00:01:57.903 LIB libspdk_ioat.a 00:01:57.903 SO libspdk_ioat.so.7.0 00:01:57.903 LIB libspdk_vfio_user.a 00:01:57.903 SYMLINK libspdk_ioat.so 00:01:57.903 SO libspdk_vfio_user.so.5.0 00:01:57.903 SYMLINK libspdk_vfio_user.so 00:01:58.160 LIB libspdk_util.a 00:01:58.160 SO libspdk_util.so.9.0 00:01:58.418 SYMLINK libspdk_util.so 00:01:58.677 CC lib/idxd/idxd.o 00:01:58.677 CC lib/idxd/idxd_user.o 00:01:58.677 CC lib/env_dpdk/env.o 00:01:58.677 CC lib/json/json_parse.o 00:01:58.677 CC lib/conf/conf.o 00:01:58.677 CC lib/env_dpdk/memory.o 00:01:58.677 CC lib/json/json_util.o 00:01:58.677 CC lib/env_dpdk/pci.o 00:01:58.677 CC lib/json/json_write.o 00:01:58.677 CC lib/env_dpdk/init.o 00:01:58.677 CC lib/env_dpdk/threads.o 00:01:58.677 CC lib/vmd/vmd.o 00:01:58.677 CC lib/env_dpdk/pci_ioat.o 00:01:58.677 CC lib/rdma/common.o 00:01:58.677 CC lib/env_dpdk/pci_virtio.o 00:01:58.677 CC lib/rdma/rdma_verbs.o 00:01:58.677 CC lib/vmd/led.o 00:01:58.677 CC lib/env_dpdk/pci_vmd.o 00:01:58.677 CC lib/env_dpdk/pci_idxd.o 00:01:58.677 CC lib/env_dpdk/pci_event.o 00:01:58.677 CC lib/env_dpdk/sigbus_handler.o 00:01:58.677 CC lib/env_dpdk/pci_dpdk.o 00:01:58.677 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:58.677 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:58.677 LIB libspdk_trace_parser.a 00:01:58.677 SO libspdk_trace_parser.so.5.0 00:01:58.677 SYMLINK libspdk_trace_parser.so 00:01:58.935 LIB libspdk_conf.a 00:01:58.935 SO libspdk_conf.so.6.0 00:01:58.935 LIB libspdk_json.a 00:01:58.935 SYMLINK libspdk_conf.so 00:01:58.935 LIB libspdk_rdma.a 00:01:58.935 SO libspdk_json.so.6.0 00:01:58.935 SO libspdk_rdma.so.6.0 00:01:58.935 SYMLINK libspdk_json.so 00:01:58.935 SYMLINK libspdk_rdma.so 00:01:59.193 CC 
lib/jsonrpc/jsonrpc_server.o 00:01:59.193 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:59.193 CC lib/jsonrpc/jsonrpc_client.o 00:01:59.193 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:59.193 LIB libspdk_idxd.a 00:01:59.193 SO libspdk_idxd.so.12.0 00:01:59.193 LIB libspdk_vmd.a 00:01:59.193 SYMLINK libspdk_idxd.so 00:01:59.452 SO libspdk_vmd.so.6.0 00:01:59.452 SYMLINK libspdk_vmd.so 00:01:59.452 LIB libspdk_jsonrpc.a 00:01:59.452 SO libspdk_jsonrpc.so.6.0 00:01:59.711 SYMLINK libspdk_jsonrpc.so 00:01:59.969 CC lib/rpc/rpc.o 00:02:00.226 LIB libspdk_rpc.a 00:02:00.226 SO libspdk_rpc.so.6.0 00:02:00.226 SYMLINK libspdk_rpc.so 00:02:00.484 CC lib/trace/trace.o 00:02:00.484 CC lib/trace/trace_flags.o 00:02:00.484 CC lib/trace/trace_rpc.o 00:02:00.484 CC lib/keyring/keyring.o 00:02:00.484 CC lib/keyring/keyring_rpc.o 00:02:00.484 CC lib/notify/notify.o 00:02:00.484 CC lib/notify/notify_rpc.o 00:02:00.741 LIB libspdk_notify.a 00:02:00.741 SO libspdk_notify.so.6.0 00:02:00.741 LIB libspdk_keyring.a 00:02:00.741 SO libspdk_keyring.so.1.0 00:02:00.741 LIB libspdk_trace.a 00:02:00.741 SYMLINK libspdk_notify.so 00:02:00.741 SO libspdk_trace.so.10.0 00:02:00.741 SYMLINK libspdk_keyring.so 00:02:00.998 SYMLINK libspdk_trace.so 00:02:00.998 LIB libspdk_env_dpdk.a 00:02:00.998 SO libspdk_env_dpdk.so.14.0 00:02:00.998 CC lib/thread/thread.o 00:02:00.998 CC lib/thread/iobuf.o 00:02:00.998 CC lib/sock/sock.o 00:02:00.998 CC lib/sock/sock_rpc.o 00:02:01.256 SYMLINK libspdk_env_dpdk.so 00:02:01.515 LIB libspdk_sock.a 00:02:01.515 SO libspdk_sock.so.9.0 00:02:01.515 SYMLINK libspdk_sock.so 00:02:01.801 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:01.801 CC lib/nvme/nvme_ctrlr.o 00:02:01.801 CC lib/nvme/nvme_fabric.o 00:02:01.801 CC lib/nvme/nvme_ns_cmd.o 00:02:01.801 CC lib/nvme/nvme_ns.o 00:02:01.801 CC lib/nvme/nvme_pcie_common.o 00:02:01.801 CC lib/nvme/nvme_pcie.o 00:02:01.801 CC lib/nvme/nvme_qpair.o 00:02:01.801 CC lib/nvme/nvme.o 00:02:01.801 CC lib/nvme/nvme_quirks.o 00:02:01.801 CC lib/nvme/nvme_transport.o 00:02:01.801 CC lib/nvme/nvme_discovery.o 00:02:01.801 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:01.801 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:01.801 CC lib/nvme/nvme_tcp.o 00:02:01.801 CC lib/nvme/nvme_opal.o 00:02:01.801 CC lib/nvme/nvme_io_msg.o 00:02:01.801 CC lib/nvme/nvme_poll_group.o 00:02:01.801 CC lib/nvme/nvme_zns.o 00:02:01.801 CC lib/nvme/nvme_stubs.o 00:02:01.801 CC lib/nvme/nvme_auth.o 00:02:01.801 CC lib/nvme/nvme_cuse.o 00:02:01.801 CC lib/nvme/nvme_rdma.o 00:02:02.738 LIB libspdk_thread.a 00:02:02.738 SO libspdk_thread.so.10.0 00:02:02.996 SYMLINK libspdk_thread.so 00:02:02.996 CC lib/accel/accel.o 00:02:02.996 CC lib/init/json_config.o 00:02:02.996 CC lib/blob/blobstore.o 00:02:02.996 CC lib/accel/accel_rpc.o 00:02:02.996 CC lib/accel/accel_sw.o 00:02:02.996 CC lib/init/subsystem.o 00:02:02.996 CC lib/blob/request.o 00:02:02.996 CC lib/blob/zeroes.o 00:02:02.996 CC lib/init/subsystem_rpc.o 00:02:02.996 CC lib/blob/blob_bs_dev.o 00:02:02.996 CC lib/virtio/virtio.o 00:02:02.996 CC lib/init/rpc.o 00:02:02.996 CC lib/virtio/virtio_vhost_user.o 00:02:02.996 CC lib/virtio/virtio_vfio_user.o 00:02:02.996 CC lib/virtio/virtio_pci.o 00:02:03.253 LIB libspdk_init.a 00:02:03.253 SO libspdk_init.so.5.0 00:02:03.512 LIB libspdk_virtio.a 00:02:03.512 SYMLINK libspdk_init.so 00:02:03.512 SO libspdk_virtio.so.7.0 00:02:03.512 SYMLINK libspdk_virtio.so 00:02:03.512 CC lib/event/app.o 00:02:03.512 CC lib/event/reactor.o 00:02:03.512 CC lib/event/log_rpc.o 00:02:03.512 CC lib/event/app_rpc.o 00:02:03.512 CC 
lib/event/scheduler_static.o 00:02:04.077 LIB libspdk_event.a 00:02:04.077 SO libspdk_event.so.13.0 00:02:04.335 SYMLINK libspdk_event.so 00:02:04.335 LIB libspdk_accel.a 00:02:04.335 SO libspdk_accel.so.15.0 00:02:04.335 LIB libspdk_nvme.a 00:02:04.335 SYMLINK libspdk_accel.so 00:02:04.593 SO libspdk_nvme.so.13.0 00:02:04.593 CC lib/bdev/bdev.o 00:02:04.593 CC lib/bdev/bdev_rpc.o 00:02:04.593 CC lib/bdev/bdev_zone.o 00:02:04.593 CC lib/bdev/part.o 00:02:04.593 CC lib/bdev/scsi_nvme.o 00:02:04.851 SYMLINK libspdk_nvme.so 00:02:06.750 LIB libspdk_blob.a 00:02:06.750 SO libspdk_blob.so.11.0 00:02:07.009 SYMLINK libspdk_blob.so 00:02:07.009 CC lib/lvol/lvol.o 00:02:07.009 CC lib/blobfs/blobfs.o 00:02:07.009 CC lib/blobfs/tree.o 00:02:07.943 LIB libspdk_bdev.a 00:02:07.943 SO libspdk_bdev.so.15.0 00:02:07.943 SYMLINK libspdk_bdev.so 00:02:08.210 CC lib/nvmf/ctrlr.o 00:02:08.210 CC lib/ublk/ublk.o 00:02:08.210 CC lib/ublk/ublk_rpc.o 00:02:08.210 CC lib/nbd/nbd.o 00:02:08.210 CC lib/nvmf/ctrlr_discovery.o 00:02:08.210 CC lib/nvmf/ctrlr_bdev.o 00:02:08.210 CC lib/scsi/dev.o 00:02:08.210 CC lib/nbd/nbd_rpc.o 00:02:08.210 CC lib/nvmf/subsystem.o 00:02:08.210 CC lib/ftl/ftl_core.o 00:02:08.210 CC lib/nvmf/nvmf.o 00:02:08.210 CC lib/scsi/lun.o 00:02:08.210 CC lib/ftl/ftl_init.o 00:02:08.210 CC lib/nvmf/nvmf_rpc.o 00:02:08.210 CC lib/ftl/ftl_layout.o 00:02:08.210 CC lib/nvmf/transport.o 00:02:08.210 CC lib/scsi/port.o 00:02:08.210 CC lib/scsi/scsi.o 00:02:08.210 CC lib/ftl/ftl_debug.o 00:02:08.210 CC lib/nvmf/tcp.o 00:02:08.210 CC lib/ftl/ftl_io.o 00:02:08.210 CC lib/scsi/scsi_bdev.o 00:02:08.210 CC lib/scsi/scsi_pr.o 00:02:08.210 CC lib/ftl/ftl_sb.o 00:02:08.210 CC lib/nvmf/rdma.o 00:02:08.210 CC lib/ftl/ftl_l2p.o 00:02:08.210 CC lib/scsi/scsi_rpc.o 00:02:08.210 CC lib/scsi/task.o 00:02:08.210 CC lib/ftl/ftl_nv_cache.o 00:02:08.210 CC lib/ftl/ftl_l2p_flat.o 00:02:08.210 CC lib/ftl/ftl_band.o 00:02:08.210 CC lib/ftl/ftl_band_ops.o 00:02:08.210 CC lib/ftl/ftl_writer.o 00:02:08.210 CC lib/ftl/ftl_rq.o 00:02:08.210 CC lib/ftl/ftl_reloc.o 00:02:08.210 CC lib/ftl/ftl_l2p_cache.o 00:02:08.210 CC lib/ftl/ftl_p2l.o 00:02:08.210 LIB libspdk_lvol.a 00:02:08.210 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.210 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.210 LIB libspdk_blobfs.a 00:02:08.210 SO libspdk_lvol.so.10.0 00:02:08.210 SO libspdk_blobfs.so.10.0 00:02:08.471 SYMLINK libspdk_lvol.so 00:02:08.471 SYMLINK libspdk_blobfs.so 00:02:08.471 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.471 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.471 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.471 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.471 CC lib/ftl/utils/ftl_conf.o 00:02:08.471 CC lib/ftl/utils/ftl_md.o 00:02:08.471 CC lib/ftl/utils/ftl_mempool.o 00:02:08.471 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.471 CC lib/ftl/utils/ftl_property.o 00:02:08.731 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.731 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.731 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.731 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.731 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.731 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.731 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.731 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.731 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.731 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.731 CC lib/ftl/base/ftl_base_dev.o 00:02:08.731 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.731 CC lib/ftl/ftl_trace.o 00:02:08.990 LIB libspdk_nbd.a 00:02:08.990 LIB libspdk_scsi.a 00:02:08.990 SO libspdk_nbd.so.7.0 00:02:08.990 SO libspdk_scsi.so.9.0 00:02:08.990 SYMLINK libspdk_nbd.so 00:02:09.248 SYMLINK libspdk_scsi.so 00:02:09.248 LIB libspdk_ublk.a 00:02:09.248 SO libspdk_ublk.so.3.0 00:02:09.248 SYMLINK libspdk_ublk.so 00:02:09.248 CC lib/vhost/vhost.o 00:02:09.248 CC lib/iscsi/conn.o 00:02:09.248 CC lib/iscsi/init_grp.o 00:02:09.248 CC lib/vhost/vhost_rpc.o 00:02:09.248 CC lib/vhost/vhost_scsi.o 00:02:09.248 CC lib/iscsi/iscsi.o 00:02:09.248 CC lib/vhost/vhost_blk.o 00:02:09.248 CC lib/iscsi/md5.o 00:02:09.248 CC lib/iscsi/param.o 00:02:09.248 CC lib/vhost/rte_vhost_user.o 00:02:09.248 CC lib/iscsi/portal_grp.o 00:02:09.248 CC lib/iscsi/tgt_node.o 00:02:09.248 CC lib/iscsi/iscsi_subsystem.o 00:02:09.248 CC lib/iscsi/iscsi_rpc.o 00:02:09.248 CC lib/iscsi/task.o 00:02:09.506 LIB libspdk_ftl.a 00:02:09.764 SO libspdk_ftl.so.9.0 00:02:10.022 SYMLINK libspdk_ftl.so 00:02:10.955 LIB libspdk_vhost.a 00:02:10.955 SO libspdk_vhost.so.8.0 00:02:10.955 LIB libspdk_nvmf.a 00:02:10.955 SYMLINK libspdk_vhost.so 00:02:10.955 SO libspdk_nvmf.so.18.0 00:02:10.955 LIB libspdk_iscsi.a 00:02:10.955 SO libspdk_iscsi.so.8.0 00:02:11.212 SYMLINK libspdk_nvmf.so 00:02:11.470 SYMLINK libspdk_iscsi.so 00:02:11.729 CC module/env_dpdk/env_dpdk_rpc.o 00:02:11.729 CC module/accel/error/accel_error.o 00:02:11.729 CC module/accel/error/accel_error_rpc.o 00:02:11.729 CC module/accel/dsa/accel_dsa.o 00:02:11.729 CC module/sock/posix/posix.o 00:02:11.729 CC module/accel/ioat/accel_ioat.o 00:02:11.729 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.729 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.729 CC module/keyring/file/keyring.o 00:02:11.729 CC module/keyring/file/keyring_rpc.o 00:02:11.729 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.729 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.729 CC module/blob/bdev/blob_bdev.o 00:02:11.729 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.729 CC module/accel/iaa/accel_iaa.o 00:02:11.729 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.729 LIB libspdk_env_dpdk_rpc.a 00:02:11.729 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.729 SYMLINK libspdk_env_dpdk_rpc.so 00:02:11.987 LIB libspdk_keyring_file.a 00:02:11.987 LIB libspdk_scheduler_gscheduler.a 00:02:11.987 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.987 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.987 SO libspdk_keyring_file.so.1.0 00:02:11.987 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.987 LIB libspdk_accel_error.a 00:02:11.987 LIB libspdk_accel_ioat.a 00:02:11.987 LIB libspdk_accel_iaa.a 00:02:11.987 LIB libspdk_scheduler_dynamic.a 00:02:11.987 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.987 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.987 SO libspdk_accel_error.so.2.0 00:02:11.987 SYMLINK libspdk_keyring_file.so 00:02:11.987 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.987 SO libspdk_accel_iaa.so.3.0 00:02:11.987 SO libspdk_accel_ioat.so.6.0 00:02:11.987 LIB libspdk_accel_dsa.a 00:02:11.987 SO libspdk_accel_dsa.so.5.0 00:02:11.987 SYMLINK libspdk_accel_error.so 00:02:11.987 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.987 LIB libspdk_blob_bdev.a 00:02:11.987 SYMLINK libspdk_accel_ioat.so 00:02:11.987 SYMLINK libspdk_accel_iaa.so 00:02:11.987 SO 
libspdk_blob_bdev.so.11.0 00:02:11.987 SYMLINK libspdk_accel_dsa.so 00:02:12.245 SYMLINK libspdk_blob_bdev.so 00:02:12.506 CC module/bdev/delay/vbdev_delay.o 00:02:12.506 CC module/bdev/malloc/bdev_malloc.o 00:02:12.506 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.506 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.506 CC module/bdev/nvme/bdev_nvme.o 00:02:12.506 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.506 CC module/bdev/nvme/nvme_rpc.o 00:02:12.506 CC module/bdev/gpt/gpt.o 00:02:12.506 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.506 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.506 CC module/bdev/nvme/vbdev_opal.o 00:02:12.506 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.506 CC module/bdev/null/bdev_null.o 00:02:12.506 CC module/bdev/null/bdev_null_rpc.o 00:02:12.506 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.506 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.506 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.506 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:12.506 CC module/bdev/error/vbdev_error.o 00:02:12.506 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.506 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.506 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:12.506 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:12.506 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.506 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.506 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.506 CC module/bdev/split/vbdev_split.o 00:02:12.506 CC module/bdev/aio/bdev_aio.o 00:02:12.506 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.506 CC module/bdev/ftl/bdev_ftl.o 00:02:12.506 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.506 CC module/bdev/raid/bdev_raid.o 00:02:12.506 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.506 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.506 CC module/bdev/raid/bdev_raid_sb.o 00:02:12.506 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.506 CC module/bdev/raid/raid0.o 00:02:12.506 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.506 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.506 CC module/bdev/raid/raid1.o 00:02:12.506 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.506 CC module/bdev/raid/concat.o 00:02:12.765 LIB libspdk_sock_posix.a 00:02:12.765 SO libspdk_sock_posix.so.6.0 00:02:12.765 LIB libspdk_bdev_split.a 00:02:12.765 LIB libspdk_bdev_null.a 00:02:12.765 LIB libspdk_blobfs_bdev.a 00:02:12.765 SO libspdk_bdev_split.so.6.0 00:02:12.765 SO libspdk_bdev_null.so.6.0 00:02:12.765 SO libspdk_blobfs_bdev.so.6.0 00:02:12.765 SYMLINK libspdk_sock_posix.so 00:02:12.765 LIB libspdk_bdev_ftl.a 00:02:12.765 SYMLINK libspdk_bdev_split.so 00:02:12.765 SO libspdk_bdev_ftl.so.6.0 00:02:12.765 LIB libspdk_bdev_gpt.a 00:02:12.765 SYMLINK libspdk_bdev_null.so 00:02:12.765 SO libspdk_bdev_gpt.so.6.0 00:02:13.023 LIB libspdk_bdev_passthru.a 00:02:13.023 SYMLINK libspdk_blobfs_bdev.so 00:02:13.023 SYMLINK libspdk_bdev_ftl.so 00:02:13.023 SO libspdk_bdev_passthru.so.6.0 00:02:13.023 SYMLINK libspdk_bdev_gpt.so 00:02:13.023 LIB libspdk_bdev_error.a 00:02:13.023 LIB libspdk_bdev_zone_block.a 00:02:13.023 SO libspdk_bdev_error.so.6.0 00:02:13.023 SYMLINK libspdk_bdev_passthru.so 00:02:13.023 SO libspdk_bdev_zone_block.so.6.0 00:02:13.023 LIB libspdk_bdev_malloc.a 00:02:13.023 LIB libspdk_bdev_aio.a 00:02:13.023 LIB libspdk_bdev_delay.a 00:02:13.023 SYMLINK libspdk_bdev_error.so 00:02:13.023 LIB libspdk_bdev_iscsi.a 00:02:13.023 SO libspdk_bdev_malloc.so.6.0 00:02:13.023 SO libspdk_bdev_aio.so.6.0 00:02:13.023 SYMLINK libspdk_bdev_zone_block.so 00:02:13.023 SO 
libspdk_bdev_delay.so.6.0 00:02:13.024 SO libspdk_bdev_iscsi.so.6.0 00:02:13.024 LIB libspdk_bdev_lvol.a 00:02:13.024 SYMLINK libspdk_bdev_malloc.so 00:02:13.024 SYMLINK libspdk_bdev_aio.so 00:02:13.024 LIB libspdk_bdev_virtio.a 00:02:13.024 SYMLINK libspdk_bdev_delay.so 00:02:13.024 SO libspdk_bdev_lvol.so.6.0 00:02:13.024 SYMLINK libspdk_bdev_iscsi.so 00:02:13.024 SO libspdk_bdev_virtio.so.6.0 00:02:13.282 SYMLINK libspdk_bdev_lvol.so 00:02:13.282 SYMLINK libspdk_bdev_virtio.so 00:02:13.925 LIB libspdk_bdev_raid.a 00:02:13.925 SO libspdk_bdev_raid.so.6.0 00:02:13.925 SYMLINK libspdk_bdev_raid.so 00:02:15.298 LIB libspdk_bdev_nvme.a 00:02:15.298 SO libspdk_bdev_nvme.so.7.0 00:02:15.298 SYMLINK libspdk_bdev_nvme.so 00:02:15.906 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:15.906 CC module/event/subsystems/vmd/vmd.o 00:02:15.906 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.906 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.906 CC module/event/subsystems/keyring/keyring.o 00:02:15.906 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.906 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.906 CC module/event/subsystems/sock/sock.o 00:02:15.906 LIB libspdk_event_keyring.a 00:02:15.906 LIB libspdk_event_vhost_blk.a 00:02:15.906 LIB libspdk_event_sock.a 00:02:15.906 SO libspdk_event_keyring.so.1.0 00:02:15.906 SO libspdk_event_vhost_blk.so.3.0 00:02:15.906 LIB libspdk_event_vmd.a 00:02:15.907 LIB libspdk_event_scheduler.a 00:02:15.907 LIB libspdk_event_iobuf.a 00:02:15.907 SO libspdk_event_sock.so.5.0 00:02:15.907 SO libspdk_event_scheduler.so.4.0 00:02:15.907 SO libspdk_event_vmd.so.6.0 00:02:15.907 SO libspdk_event_iobuf.so.3.0 00:02:15.907 SYMLINK libspdk_event_vhost_blk.so 00:02:15.907 SYMLINK libspdk_event_keyring.so 00:02:16.165 SYMLINK libspdk_event_sock.so 00:02:16.165 SYMLINK libspdk_event_scheduler.so 00:02:16.165 SYMLINK libspdk_event_vmd.so 00:02:16.165 SYMLINK libspdk_event_iobuf.so 00:02:16.422 CC module/event/subsystems/accel/accel.o 00:02:16.680 LIB libspdk_event_accel.a 00:02:16.680 SO libspdk_event_accel.so.6.0 00:02:16.680 SYMLINK libspdk_event_accel.so 00:02:16.938 CC module/event/subsystems/bdev/bdev.o 00:02:17.196 LIB libspdk_event_bdev.a 00:02:17.196 SO libspdk_event_bdev.so.6.0 00:02:17.196 SYMLINK libspdk_event_bdev.so 00:02:17.454 CC module/event/subsystems/nbd/nbd.o 00:02:17.454 CC module/event/subsystems/scsi/scsi.o 00:02:17.454 CC module/event/subsystems/ublk/ublk.o 00:02:17.454 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.454 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:17.454 LIB libspdk_event_nbd.a 00:02:17.712 LIB libspdk_event_scsi.a 00:02:17.712 LIB libspdk_event_ublk.a 00:02:17.712 SO libspdk_event_nbd.so.6.0 00:02:17.712 SO libspdk_event_scsi.so.6.0 00:02:17.712 SO libspdk_event_ublk.so.3.0 00:02:17.712 SYMLINK libspdk_event_nbd.so 00:02:17.712 SYMLINK libspdk_event_scsi.so 00:02:17.712 LIB libspdk_event_nvmf.a 00:02:17.712 SYMLINK libspdk_event_ublk.so 00:02:17.712 SO libspdk_event_nvmf.so.6.0 00:02:17.712 SYMLINK libspdk_event_nvmf.so 00:02:17.970 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.970 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.970 LIB libspdk_event_vhost_scsi.a 00:02:17.970 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.970 LIB libspdk_event_iscsi.a 00:02:18.228 SO libspdk_event_iscsi.so.6.0 00:02:18.228 SYMLINK libspdk_event_vhost_scsi.so 00:02:18.228 SYMLINK libspdk_event_iscsi.so 00:02:18.228 SO libspdk.so.6.0 00:02:18.228 SYMLINK libspdk.so 00:02:18.491 CC 
app/trace_record/trace_record.o 00:02:18.491 CXX app/trace/trace.o 00:02:18.491 CC app/spdk_nvme_perf/perf.o 00:02:18.491 CC app/spdk_top/spdk_top.o 00:02:18.491 CC app/spdk_lspci/spdk_lspci.o 00:02:18.491 CC app/spdk_nvme_identify/identify.o 00:02:18.491 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.491 TEST_HEADER include/spdk/accel.h 00:02:18.491 TEST_HEADER include/spdk/accel_module.h 00:02:18.491 CC test/rpc_client/rpc_client_test.o 00:02:18.491 TEST_HEADER include/spdk/assert.h 00:02:18.491 TEST_HEADER include/spdk/barrier.h 00:02:18.491 TEST_HEADER include/spdk/base64.h 00:02:18.491 TEST_HEADER include/spdk/bdev.h 00:02:18.491 TEST_HEADER include/spdk/bdev_module.h 00:02:18.491 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.751 TEST_HEADER include/spdk/bit_array.h 00:02:18.751 TEST_HEADER include/spdk/bit_pool.h 00:02:18.751 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.751 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.751 TEST_HEADER include/spdk/blobfs.h 00:02:18.751 TEST_HEADER include/spdk/blob.h 00:02:18.751 TEST_HEADER include/spdk/conf.h 00:02:18.751 TEST_HEADER include/spdk/config.h 00:02:18.751 CC app/nvmf_tgt/nvmf_main.o 00:02:18.751 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.751 TEST_HEADER include/spdk/cpuset.h 00:02:18.751 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.751 TEST_HEADER include/spdk/crc16.h 00:02:18.751 CC app/spdk_dd/spdk_dd.o 00:02:18.751 TEST_HEADER include/spdk/crc32.h 00:02:18.751 TEST_HEADER include/spdk/crc64.h 00:02:18.751 TEST_HEADER include/spdk/dif.h 00:02:18.751 TEST_HEADER include/spdk/dma.h 00:02:18.751 TEST_HEADER include/spdk/endian.h 00:02:18.751 CC app/vhost/vhost.o 00:02:18.751 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.751 TEST_HEADER include/spdk/env.h 00:02:18.751 TEST_HEADER include/spdk/event.h 00:02:18.751 TEST_HEADER include/spdk/fd_group.h 00:02:18.751 TEST_HEADER include/spdk/fd.h 00:02:18.751 TEST_HEADER include/spdk/file.h 00:02:18.751 TEST_HEADER include/spdk/ftl.h 00:02:18.751 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.751 CC app/spdk_tgt/spdk_tgt.o 00:02:18.751 TEST_HEADER include/spdk/hexlify.h 00:02:18.751 TEST_HEADER include/spdk/histogram_data.h 00:02:18.751 TEST_HEADER include/spdk/idxd.h 00:02:18.751 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.751 TEST_HEADER include/spdk/init.h 00:02:18.752 CC examples/util/zipf/zipf.o 00:02:18.752 CC app/fio/nvme/fio_plugin.o 00:02:18.752 TEST_HEADER include/spdk/ioat.h 00:02:18.752 CC examples/nvme/hello_world/hello_world.o 00:02:18.752 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.752 CC examples/sock/hello_world/hello_sock.o 00:02:18.752 CC examples/nvme/hotplug/hotplug.o 00:02:18.752 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.752 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.752 CC examples/nvme/reconnect/reconnect.o 00:02:18.752 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:18.752 CC examples/vmd/led/led.o 00:02:18.752 CC examples/nvme/abort/abort.o 00:02:18.752 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.752 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.752 CC examples/accel/perf/accel_perf.o 00:02:18.752 CC test/event/event_perf/event_perf.o 00:02:18.752 CC examples/ioat/perf/perf.o 00:02:18.752 CC examples/idxd/perf/perf.o 00:02:18.752 CC examples/nvme/arbitration/arbitration.o 00:02:18.752 TEST_HEADER include/spdk/json.h 00:02:18.752 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.752 TEST_HEADER include/spdk/keyring.h 00:02:18.752 CC test/nvme/aer/aer.o 00:02:18.752 TEST_HEADER include/spdk/keyring_module.h 00:02:18.752 CC 
test/thread/poller_perf/poller_perf.o 00:02:18.752 TEST_HEADER include/spdk/likely.h 00:02:18.752 TEST_HEADER include/spdk/log.h 00:02:18.752 TEST_HEADER include/spdk/lvol.h 00:02:18.752 TEST_HEADER include/spdk/memory.h 00:02:18.752 TEST_HEADER include/spdk/mmio.h 00:02:18.752 TEST_HEADER include/spdk/nbd.h 00:02:18.752 CC examples/bdev/hello_world/hello_bdev.o 00:02:18.752 TEST_HEADER include/spdk/notify.h 00:02:18.752 TEST_HEADER include/spdk/nvme.h 00:02:18.752 CC app/fio/bdev/fio_plugin.o 00:02:18.752 TEST_HEADER include/spdk/nvme_intel.h 00:02:18.752 CC examples/nvmf/nvmf/nvmf.o 00:02:18.752 CC examples/bdev/bdevperf/bdevperf.o 00:02:18.752 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:18.752 CC test/accel/dif/dif.o 00:02:18.752 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:18.752 CC examples/thread/thread/thread_ex.o 00:02:18.752 CC test/blobfs/mkfs/mkfs.o 00:02:18.752 TEST_HEADER include/spdk/nvme_spec.h 00:02:18.752 CC examples/blob/cli/blobcli.o 00:02:18.752 TEST_HEADER include/spdk/nvme_zns.h 00:02:18.752 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:18.752 CC test/bdev/bdevio/bdevio.o 00:02:18.752 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:18.752 CC test/app/bdev_svc/bdev_svc.o 00:02:18.752 CC examples/blob/hello_world/hello_blob.o 00:02:18.752 CC test/dma/test_dma/test_dma.o 00:02:18.752 TEST_HEADER include/spdk/nvmf.h 00:02:18.752 TEST_HEADER include/spdk/nvmf_spec.h 00:02:18.752 TEST_HEADER include/spdk/nvmf_transport.h 00:02:18.752 TEST_HEADER include/spdk/opal.h 00:02:18.752 TEST_HEADER include/spdk/opal_spec.h 00:02:18.752 TEST_HEADER include/spdk/pci_ids.h 00:02:18.752 LINK spdk_lspci 00:02:18.752 TEST_HEADER include/spdk/pipe.h 00:02:18.752 TEST_HEADER include/spdk/queue.h 00:02:19.016 TEST_HEADER include/spdk/reduce.h 00:02:19.016 TEST_HEADER include/spdk/rpc.h 00:02:19.016 TEST_HEADER include/spdk/scheduler.h 00:02:19.016 TEST_HEADER include/spdk/scsi.h 00:02:19.016 TEST_HEADER include/spdk/scsi_spec.h 00:02:19.016 TEST_HEADER include/spdk/sock.h 00:02:19.016 CC test/lvol/esnap/esnap.o 00:02:19.016 TEST_HEADER include/spdk/stdinc.h 00:02:19.016 TEST_HEADER include/spdk/string.h 00:02:19.016 TEST_HEADER include/spdk/thread.h 00:02:19.016 TEST_HEADER include/spdk/trace.h 00:02:19.016 TEST_HEADER include/spdk/trace_parser.h 00:02:19.016 TEST_HEADER include/spdk/tree.h 00:02:19.016 TEST_HEADER include/spdk/ublk.h 00:02:19.016 TEST_HEADER include/spdk/util.h 00:02:19.016 TEST_HEADER include/spdk/uuid.h 00:02:19.016 CC test/env/mem_callbacks/mem_callbacks.o 00:02:19.016 TEST_HEADER include/spdk/version.h 00:02:19.016 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:19.016 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:19.016 TEST_HEADER include/spdk/vhost.h 00:02:19.016 TEST_HEADER include/spdk/vmd.h 00:02:19.016 TEST_HEADER include/spdk/xor.h 00:02:19.016 TEST_HEADER include/spdk/zipf.h 00:02:19.016 CXX test/cpp_headers/accel.o 00:02:19.016 LINK lsvmd 00:02:19.016 LINK rpc_client_test 00:02:19.016 LINK interrupt_tgt 00:02:19.016 LINK spdk_nvme_discover 00:02:19.016 LINK event_perf 00:02:19.016 LINK led 00:02:19.016 LINK nvmf_tgt 00:02:19.016 LINK vhost 00:02:19.016 LINK zipf 00:02:19.016 LINK poller_perf 00:02:19.016 LINK cmb_copy 00:02:19.016 LINK spdk_trace_record 00:02:19.016 LINK iscsi_tgt 00:02:19.016 LINK pmr_persistence 00:02:19.278 LINK ioat_perf 00:02:19.278 LINK hello_world 00:02:19.278 LINK spdk_tgt 00:02:19.278 LINK hello_sock 00:02:19.278 LINK mkfs 00:02:19.278 LINK bdev_svc 00:02:19.278 LINK hotplug 00:02:19.278 LINK hello_bdev 00:02:19.278 LINK thread 
00:02:19.278 LINK hello_blob 00:02:19.278 LINK aer 00:02:19.278 CXX test/cpp_headers/accel_module.o 00:02:19.278 CC examples/ioat/verify/verify.o 00:02:19.278 CXX test/cpp_headers/assert.o 00:02:19.543 LINK idxd_perf 00:02:19.543 LINK nvmf 00:02:19.543 LINK spdk_dd 00:02:19.543 LINK arbitration 00:02:19.543 LINK reconnect 00:02:19.543 LINK spdk_trace 00:02:19.543 LINK abort 00:02:19.543 CXX test/cpp_headers/barrier.o 00:02:19.543 LINK dif 00:02:19.543 LINK test_dma 00:02:19.543 CC test/env/vtophys/vtophys.o 00:02:19.543 CC test/event/reactor/reactor.o 00:02:19.543 CC test/nvme/reset/reset.o 00:02:19.543 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:19.543 CC test/env/memory/memory_ut.o 00:02:19.543 LINK accel_perf 00:02:19.543 LINK bdevio 00:02:19.810 CC test/app/histogram_perf/histogram_perf.o 00:02:19.810 CC test/app/stub/stub.o 00:02:19.810 CC test/nvme/sgl/sgl.o 00:02:19.810 CC test/app/jsoncat/jsoncat.o 00:02:19.810 CXX test/cpp_headers/base64.o 00:02:19.810 CC test/event/reactor_perf/reactor_perf.o 00:02:19.810 CC test/event/app_repeat/app_repeat.o 00:02:19.810 CC test/nvme/e2edp/nvme_dp.o 00:02:19.810 CXX test/cpp_headers/bdev.o 00:02:19.810 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:19.811 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:19.811 LINK nvme_manage 00:02:19.811 CXX test/cpp_headers/bdev_module.o 00:02:19.811 CXX test/cpp_headers/bdev_zone.o 00:02:19.811 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:19.811 CXX test/cpp_headers/bit_array.o 00:02:19.811 LINK spdk_bdev 00:02:19.811 LINK spdk_nvme 00:02:19.811 CC test/env/pci/pci_ut.o 00:02:19.811 CXX test/cpp_headers/bit_pool.o 00:02:19.811 LINK blobcli 00:02:19.811 CXX test/cpp_headers/blob_bdev.o 00:02:19.811 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:19.811 LINK verify 00:02:19.811 CC test/event/scheduler/scheduler.o 00:02:19.811 CXX test/cpp_headers/blobfs_bdev.o 00:02:19.811 CC test/nvme/overhead/overhead.o 00:02:19.811 CC test/nvme/err_injection/err_injection.o 00:02:19.811 CC test/nvme/reserve/reserve.o 00:02:19.811 CC test/nvme/startup/startup.o 00:02:19.811 LINK vtophys 00:02:20.070 CC test/nvme/simple_copy/simple_copy.o 00:02:20.070 LINK reactor 00:02:20.070 CXX test/cpp_headers/blobfs.o 00:02:20.070 LINK env_dpdk_post_init 00:02:20.070 LINK histogram_perf 00:02:20.070 LINK reactor_perf 00:02:20.070 LINK jsoncat 00:02:20.070 CXX test/cpp_headers/blob.o 00:02:20.070 CXX test/cpp_headers/conf.o 00:02:20.070 CC test/nvme/connect_stress/connect_stress.o 00:02:20.070 LINK stub 00:02:20.070 CC test/nvme/boot_partition/boot_partition.o 00:02:20.070 CXX test/cpp_headers/config.o 00:02:20.070 LINK app_repeat 00:02:20.070 CXX test/cpp_headers/cpuset.o 00:02:20.070 LINK reset 00:02:20.070 CXX test/cpp_headers/crc16.o 00:02:20.070 CXX test/cpp_headers/crc32.o 00:02:20.070 CXX test/cpp_headers/crc64.o 00:02:20.331 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.331 CC test/nvme/compliance/nvme_compliance.o 00:02:20.331 LINK mem_callbacks 00:02:20.331 CXX test/cpp_headers/dif.o 00:02:20.331 CXX test/cpp_headers/dma.o 00:02:20.331 LINK sgl 00:02:20.331 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.331 CXX test/cpp_headers/endian.o 00:02:20.331 LINK spdk_nvme_perf 00:02:20.331 CXX test/cpp_headers/env_dpdk.o 00:02:20.331 CXX test/cpp_headers/env.o 00:02:20.331 LINK spdk_nvme_identify 00:02:20.331 CXX test/cpp_headers/event.o 00:02:20.331 CC test/nvme/fdp/fdp.o 00:02:20.331 CC test/nvme/cuse/cuse.o 00:02:20.331 CXX test/cpp_headers/fd_group.o 00:02:20.331 LINK startup 00:02:20.331 CXX 
test/cpp_headers/fd.o 00:02:20.331 LINK nvme_dp 00:02:20.331 CXX test/cpp_headers/file.o 00:02:20.331 LINK reserve 00:02:20.331 CXX test/cpp_headers/ftl.o 00:02:20.331 LINK scheduler 00:02:20.331 CXX test/cpp_headers/gpt_spec.o 00:02:20.331 LINK err_injection 00:02:20.331 CXX test/cpp_headers/hexlify.o 00:02:20.331 LINK spdk_top 00:02:20.331 CXX test/cpp_headers/histogram_data.o 00:02:20.331 CXX test/cpp_headers/idxd.o 00:02:20.331 LINK bdevperf 00:02:20.331 CXX test/cpp_headers/idxd_spec.o 00:02:20.331 CXX test/cpp_headers/init.o 00:02:20.331 LINK boot_partition 00:02:20.331 CXX test/cpp_headers/ioat.o 00:02:20.331 LINK simple_copy 00:02:20.331 CXX test/cpp_headers/ioat_spec.o 00:02:20.597 CXX test/cpp_headers/iscsi_spec.o 00:02:20.597 LINK connect_stress 00:02:20.597 CXX test/cpp_headers/json.o 00:02:20.597 CXX test/cpp_headers/jsonrpc.o 00:02:20.597 CXX test/cpp_headers/keyring.o 00:02:20.597 LINK overhead 00:02:20.597 LINK nvme_fuzz 00:02:20.597 CXX test/cpp_headers/keyring_module.o 00:02:20.597 CXX test/cpp_headers/likely.o 00:02:20.597 LINK fused_ordering 00:02:20.597 CXX test/cpp_headers/log.o 00:02:20.597 CXX test/cpp_headers/lvol.o 00:02:20.597 CXX test/cpp_headers/memory.o 00:02:20.597 LINK pci_ut 00:02:20.597 CXX test/cpp_headers/mmio.o 00:02:20.597 CXX test/cpp_headers/nbd.o 00:02:20.597 CXX test/cpp_headers/notify.o 00:02:20.597 CXX test/cpp_headers/nvme.o 00:02:20.597 CXX test/cpp_headers/nvme_intel.o 00:02:20.597 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.597 LINK doorbell_aers 00:02:20.597 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.597 CXX test/cpp_headers/nvme_spec.o 00:02:20.597 CXX test/cpp_headers/nvme_zns.o 00:02:20.856 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.856 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:20.856 CXX test/cpp_headers/nvmf.o 00:02:20.856 CXX test/cpp_headers/nvmf_spec.o 00:02:20.856 CXX test/cpp_headers/nvmf_transport.o 00:02:20.856 CXX test/cpp_headers/opal.o 00:02:20.856 LINK vhost_fuzz 00:02:20.856 CXX test/cpp_headers/opal_spec.o 00:02:20.856 CXX test/cpp_headers/pci_ids.o 00:02:20.856 CXX test/cpp_headers/pipe.o 00:02:20.856 CXX test/cpp_headers/queue.o 00:02:20.856 CXX test/cpp_headers/reduce.o 00:02:20.856 CXX test/cpp_headers/rpc.o 00:02:20.856 CXX test/cpp_headers/scheduler.o 00:02:20.856 CXX test/cpp_headers/scsi.o 00:02:20.856 CXX test/cpp_headers/scsi_spec.o 00:02:20.856 CXX test/cpp_headers/sock.o 00:02:20.856 CXX test/cpp_headers/stdinc.o 00:02:20.856 CXX test/cpp_headers/string.o 00:02:20.856 CXX test/cpp_headers/thread.o 00:02:20.856 CXX test/cpp_headers/trace.o 00:02:20.856 CXX test/cpp_headers/trace_parser.o 00:02:20.856 CXX test/cpp_headers/tree.o 00:02:20.856 LINK nvme_compliance 00:02:20.856 CXX test/cpp_headers/ublk.o 00:02:20.856 CXX test/cpp_headers/util.o 00:02:20.856 CXX test/cpp_headers/uuid.o 00:02:20.856 CXX test/cpp_headers/version.o 00:02:20.856 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.856 CXX test/cpp_headers/vfio_user_spec.o 00:02:20.856 CXX test/cpp_headers/vhost.o 00:02:20.856 CXX test/cpp_headers/vmd.o 00:02:21.117 LINK fdp 00:02:21.117 CXX test/cpp_headers/xor.o 00:02:21.117 CXX test/cpp_headers/zipf.o 00:02:21.375 LINK memory_ut 00:02:21.941 LINK cuse 00:02:22.507 LINK iscsi_fuzz 00:02:25.788 LINK esnap 00:02:26.354 00:02:26.354 real 0m56.655s 00:02:26.354 user 10m42.471s 00:02:26.354 sys 2m30.976s 00:02:26.354 13:31:28 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:26.354 13:31:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.354 ************************************ 00:02:26.354 END 
TEST make 00:02:26.354 ************************************ 00:02:26.354 13:31:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:26.354 13:31:29 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:26.354 13:31:29 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:26.354 13:31:29 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.354 13:31:29 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:26.354 13:31:29 -- pm/common@45 -- $ pid=924067 00:02:26.354 13:31:29 -- pm/common@52 -- $ sudo kill -TERM 924067 00:02:26.354 13:31:29 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.354 13:31:29 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:26.354 13:31:29 -- pm/common@45 -- $ pid=924065 00:02:26.354 13:31:29 -- pm/common@52 -- $ sudo kill -TERM 924065 00:02:26.354 13:31:29 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.354 13:31:29 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:26.354 13:31:29 -- pm/common@45 -- $ pid=924068 00:02:26.354 13:31:29 -- pm/common@52 -- $ sudo kill -TERM 924068 00:02:26.354 13:31:29 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.354 13:31:29 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:26.354 13:31:29 -- pm/common@45 -- $ pid=924066 00:02:26.354 13:31:29 -- pm/common@52 -- $ sudo kill -TERM 924066 00:02:26.612 13:31:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.612 13:31:29 -- nvmf/common.sh@7 -- # uname -s 00:02:26.612 13:31:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.612 13:31:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.612 13:31:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.612 13:31:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.612 13:31:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.612 13:31:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.612 13:31:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.613 13:31:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.613 13:31:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.613 13:31:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.613 13:31:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:02:26.613 13:31:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:02:26.613 13:31:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:26.613 13:31:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.613 13:31:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:26.613 13:31:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:26.613 13:31:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:26.613 13:31:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.613 13:31:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.613 13:31:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.613 13:31:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.613 13:31:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.613 13:31:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.613 13:31:29 -- paths/export.sh@5 -- # export PATH 00:02:26.613 13:31:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.613 13:31:29 -- nvmf/common.sh@47 -- # : 0 00:02:26.613 13:31:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:26.613 13:31:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:26.613 13:31:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:26.613 13:31:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.613 13:31:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.613 13:31:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:26.613 13:31:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:26.613 13:31:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:26.613 13:31:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.613 13:31:29 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.613 13:31:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.613 13:31:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.613 13:31:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:26.613 13:31:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.613 13:31:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:26.613 13:31:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.613 13:31:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.613 13:31:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.613 13:31:29 -- spdk/autotest.sh@48 -- # udevadm_pid=980835 00:02:26.613 13:31:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:26.613 13:31:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:26.613 13:31:29 -- pm/common@17 -- # local monitor 00:02:26.613 13:31:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.613 13:31:29 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=980837 00:02:26.613 13:31:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.613 13:31:29 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=980839 00:02:26.613 13:31:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.613 13:31:29 -- pm/common@21 -- # date 
+%s 00:02:26.613 13:31:29 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=980841 00:02:26.613 13:31:29 -- pm/common@21 -- # date +%s 00:02:26.613 13:31:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.613 13:31:29 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=980847 00:02:26.613 13:31:29 -- pm/common@21 -- # date +%s 00:02:26.613 13:31:29 -- pm/common@26 -- # sleep 1 00:02:26.613 13:31:29 -- pm/common@21 -- # date +%s 00:02:26.613 13:31:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439889 00:02:26.613 13:31:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439889 00:02:26.613 13:31:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439889 00:02:26.613 13:31:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439889 00:02:26.613 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439889_collect-bmc-pm.bmc.pm.log 00:02:26.613 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439889_collect-vmstat.pm.log 00:02:26.613 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439889_collect-cpu-load.pm.log 00:02:26.613 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439889_collect-cpu-temp.pm.log 00:02:27.546 13:31:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.546 13:31:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:27.546 13:31:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:27.546 13:31:30 -- common/autotest_common.sh@10 -- # set +x 00:02:27.546 13:31:30 -- spdk/autotest.sh@59 -- # create_test_list 00:02:27.546 13:31:30 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:27.546 13:31:30 -- common/autotest_common.sh@10 -- # set +x 00:02:27.546 13:31:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:27.546 13:31:30 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:27.546 13:31:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:27.546 13:31:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:27.546 13:31:30 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:27.546 13:31:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:27.546 13:31:30 -- common/autotest_common.sh@1441 -- # uname 00:02:27.546 13:31:30 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:27.546 13:31:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.546 13:31:30 -- common/autotest_common.sh@1461 -- # uname 00:02:27.546 13:31:30 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:27.546 13:31:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE 
mk/cc.mk 00:02:27.546 13:31:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:27.546 13:31:30 -- spdk/autotest.sh@72 -- # hash lcov 00:02:27.546 13:31:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:27.546 13:31:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:27.546 --rc lcov_branch_coverage=1 00:02:27.546 --rc lcov_function_coverage=1 00:02:27.546 --rc genhtml_branch_coverage=1 00:02:27.546 --rc genhtml_function_coverage=1 00:02:27.546 --rc genhtml_legend=1 00:02:27.546 --rc geninfo_all_blocks=1 00:02:27.546 ' 00:02:27.546 13:31:30 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:27.546 --rc lcov_branch_coverage=1 00:02:27.546 --rc lcov_function_coverage=1 00:02:27.546 --rc genhtml_branch_coverage=1 00:02:27.546 --rc genhtml_function_coverage=1 00:02:27.546 --rc genhtml_legend=1 00:02:27.546 --rc geninfo_all_blocks=1 00:02:27.546 ' 00:02:27.546 13:31:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:27.546 --rc lcov_branch_coverage=1 00:02:27.546 --rc lcov_function_coverage=1 00:02:27.546 --rc genhtml_branch_coverage=1 00:02:27.546 --rc genhtml_function_coverage=1 00:02:27.546 --rc genhtml_legend=1 00:02:27.546 --rc geninfo_all_blocks=1 00:02:27.546 --no-external' 00:02:27.546 13:31:30 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:27.546 --rc lcov_branch_coverage=1 00:02:27.546 --rc lcov_function_coverage=1 00:02:27.546 --rc genhtml_branch_coverage=1 00:02:27.546 --rc genhtml_function_coverage=1 00:02:27.546 --rc genhtml_legend=1 00:02:27.546 --rc geninfo_all_blocks=1 00:02:27.546 --no-external' 00:02:27.546 13:31:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:27.804 lcov: LCOV version 1.14 00:02:27.804 13:31:30 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:40.026 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:40.026 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:41.398 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:41.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:41.398 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:41.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:41.398 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:41.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:59.498 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:59.498 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:59.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:59.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 
00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:59.499 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:59.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:59.500 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:59.500 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:59.500 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:59.500 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:59.500 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:59.500 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:59.500 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:59.500 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:59.500 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:59.500 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:59.500 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:01.399 13:32:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:01.399 13:32:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:01.399 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:03:01.399 13:32:03 -- spdk/autotest.sh@91 -- # rm -f 00:03:01.399 13:32:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.775 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:03:02.775 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:02.775 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:02.775 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:02.775 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:02.775 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:02.775 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:02.775 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:03.033 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:03.033 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:03.033 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:03.033 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:03.033 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:03.033 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:03.033 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:03.033 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:03.033 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:03.033 13:32:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:03.033 13:32:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:03.033 13:32:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:03.033 13:32:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:03.033 13:32:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:03.033 13:32:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:03.033 13:32:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:03.033 13:32:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:03.033 13:32:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:03.033 13:32:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:03.033 13:32:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:03.033 13:32:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:03.033 13:32:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:03.033 13:32:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:03.033 13:32:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:03.292 No valid GPT data, bailing 00:03:03.292 13:32:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:03.292 13:32:05 -- scripts/common.sh@391 -- # pt= 00:03:03.292 13:32:05 -- scripts/common.sh@392 -- # 
return 1 00:03:03.292 13:32:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:03.292 1+0 records in 00:03:03.292 1+0 records out 00:03:03.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040062 s, 262 MB/s 00:03:03.292 13:32:05 -- spdk/autotest.sh@118 -- # sync 00:03:03.292 13:32:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:03.292 13:32:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:03.292 13:32:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:05.820 13:32:08 -- spdk/autotest.sh@124 -- # uname -s 00:03:05.820 13:32:08 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:05.820 13:32:08 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:05.820 13:32:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:05.820 13:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:05.820 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:05.820 ************************************ 00:03:05.820 START TEST setup.sh 00:03:05.820 ************************************ 00:03:05.820 13:32:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:05.820 * Looking for test storage... 00:03:05.820 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:05.820 13:32:08 -- setup/test-setup.sh@10 -- # uname -s 00:03:05.820 13:32:08 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:05.820 13:32:08 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:05.820 13:32:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:05.820 13:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:05.820 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:05.820 ************************************ 00:03:05.820 START TEST acl 00:03:05.820 ************************************ 00:03:05.820 13:32:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:05.820 * Looking for test storage... 
00:03:05.820 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:05.820 13:32:08 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:05.820 13:32:08 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:05.820 13:32:08 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:05.820 13:32:08 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:05.820 13:32:08 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:05.820 13:32:08 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:05.820 13:32:08 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:05.820 13:32:08 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.820 13:32:08 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:05.820 13:32:08 -- setup/acl.sh@12 -- # devs=() 00:03:05.820 13:32:08 -- setup/acl.sh@12 -- # declare -a devs 00:03:05.820 13:32:08 -- setup/acl.sh@13 -- # drivers=() 00:03:05.820 13:32:08 -- setup/acl.sh@13 -- # declare -A drivers 00:03:05.820 13:32:08 -- setup/acl.sh@51 -- # setup reset 00:03:05.820 13:32:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.820 13:32:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.715 13:32:10 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:07.715 13:32:10 -- setup/acl.sh@16 -- # local dev driver 00:03:07.715 13:32:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.715 13:32:10 -- setup/acl.sh@15 -- # setup output status 00:03:07.715 13:32:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.715 13:32:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:09.094 Hugepages 00:03:09.094 node hugesize free / total 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 00:03:09.094 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 
13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # continue 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:09.094 13:32:11 -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:09.094 13:32:11 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:03:09.094 13:32:11 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:09.094 13:32:11 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:09.094 13:32:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.094 13:32:11 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:09.094 13:32:11 -- setup/acl.sh@54 -- # run_test denied denied 00:03:09.094 13:32:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.094 13:32:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.094 13:32:11 -- common/autotest_common.sh@10 -- # set +x 00:03:09.353 ************************************ 00:03:09.353 START TEST denied 00:03:09.353 ************************************ 00:03:09.353 13:32:11 -- common/autotest_common.sh@1111 -- # denied 00:03:09.353 13:32:11 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:03:09.353 13:32:11 -- setup/acl.sh@38 -- # setup output config 00:03:09.353 13:32:11 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:03:09.353 13:32:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.353 13:32:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:10.726 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:03:10.726 13:32:13 -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:03:10.726 13:32:13 -- setup/acl.sh@28 -- # local dev driver 00:03:10.726 13:32:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.726 13:32:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:03:10.726 13:32:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:03:10.726 13:32:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.726 13:32:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.726 13:32:13 -- setup/acl.sh@41 -- # setup reset 00:03:10.726 13:32:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.726 13:32:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.259 00:03:13.259 real 0m4.025s 00:03:13.259 user 0m1.192s 00:03:13.259 sys 0m2.008s 00:03:13.259 13:32:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:13.259 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:13.259 ************************************ 00:03:13.259 END TEST denied 00:03:13.259 ************************************ 00:03:13.259 13:32:15 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:13.259 13:32:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.259 13:32:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.259 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:13.517 ************************************ 00:03:13.517 START TEST allowed 00:03:13.517 ************************************ 00:03:13.517 13:32:16 -- common/autotest_common.sh@1111 -- # allowed 00:03:13.517 13:32:16 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:03:13.517 13:32:16 -- setup/acl.sh@45 -- # setup output config 00:03:13.517 13:32:16 -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:03:13.517 13:32:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.517 13:32:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:16.047 0000:84:00.0 (8086 
0a54): nvme -> vfio-pci 00:03:16.047 13:32:18 -- setup/acl.sh@47 -- # verify 00:03:16.047 13:32:18 -- setup/acl.sh@28 -- # local dev driver 00:03:16.047 13:32:18 -- setup/acl.sh@48 -- # setup reset 00:03:16.047 13:32:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.047 13:32:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.946 00:03:17.946 real 0m4.602s 00:03:17.946 user 0m1.385s 00:03:17.946 sys 0m2.098s 00:03:17.947 13:32:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.947 13:32:20 -- common/autotest_common.sh@10 -- # set +x 00:03:17.947 ************************************ 00:03:17.947 END TEST allowed 00:03:17.947 ************************************ 00:03:17.947 00:03:17.947 real 0m12.231s 00:03:17.947 user 0m3.936s 00:03:17.947 sys 0m6.413s 00:03:17.947 13:32:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.947 13:32:20 -- common/autotest_common.sh@10 -- # set +x 00:03:17.947 ************************************ 00:03:17.947 END TEST acl 00:03:17.947 ************************************ 00:03:17.947 13:32:20 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.947 13:32:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.947 13:32:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.947 13:32:20 -- common/autotest_common.sh@10 -- # set +x 00:03:18.206 ************************************ 00:03:18.206 START TEST hugepages 00:03:18.206 ************************************ 00:03:18.206 13:32:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.206 * Looking for test storage... 
00:03:18.206 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:18.206 13:32:20 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.206 13:32:20 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.206 13:32:20 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.206 13:32:20 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.206 13:32:20 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.206 13:32:20 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.206 13:32:20 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.206 13:32:20 -- setup/common.sh@18 -- # local node= 00:03:18.206 13:32:20 -- setup/common.sh@19 -- # local var val 00:03:18.206 13:32:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.206 13:32:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.206 13:32:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.206 13:32:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.206 13:32:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.206 13:32:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 38349000 kB' 'MemAvailable: 43395604 kB' 'Buffers: 2696 kB' 'Cached: 15524620 kB' 'SwapCached: 0 kB' 'Active: 11433328 kB' 'Inactive: 4569992 kB' 'Active(anon): 10777660 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478820 kB' 'Mapped: 176772 kB' 'Shmem: 10301656 kB' 'KReclaimable: 465380 kB' 'Slab: 840640 kB' 'SReclaimable: 465380 kB' 'SUnreclaim: 375260 kB' 'KernelStack: 13216 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562244 kB' 'Committed_AS: 11945680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198348 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.206 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.206 13:32:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # continue 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 13:32:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 13:32:20 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.207 13:32:20 -- setup/common.sh@33 -- # echo 2048 00:03:18.207 13:32:20 -- setup/common.sh@33 -- # return 0 00:03:18.207 13:32:20 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.207 13:32:20 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.207 13:32:20 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.207 13:32:20 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.207 13:32:20 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.207 13:32:20 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
00:03:18.207 13:32:20 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.207 13:32:20 -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.207 13:32:20 -- setup/hugepages.sh@27 -- # local node 00:03:18.207 13:32:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.207 13:32:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.207 13:32:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.207 13:32:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.207 13:32:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.207 13:32:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.207 13:32:20 -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.207 13:32:20 -- setup/hugepages.sh@37 -- # local node hp 00:03:18.207 13:32:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.207 13:32:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.207 13:32:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.207 13:32:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.207 13:32:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.207 13:32:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.207 13:32:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.207 13:32:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.207 13:32:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.207 13:32:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.208 13:32:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.208 13:32:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.208 13:32:20 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.208 13:32:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:18.208 13:32:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:18.208 13:32:20 -- common/autotest_common.sh@10 -- # set +x 00:03:18.465 ************************************ 00:03:18.465 START TEST default_setup 00:03:18.465 ************************************ 00:03:18.465 13:32:21 -- common/autotest_common.sh@1111 -- # default_setup 00:03:18.465 13:32:21 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.465 13:32:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.465 13:32:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.465 13:32:21 -- setup/hugepages.sh@51 -- # shift 00:03:18.465 13:32:21 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.465 13:32:21 -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.465 13:32:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.465 13:32:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.465 13:32:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.465 13:32:21 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.465 13:32:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.465 13:32:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.465 13:32:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.465 13:32:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.465 13:32:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.465 13:32:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.465 13:32:21 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.465 13:32:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.465 13:32:21 -- setup/hugepages.sh@73 -- # return 0 00:03:18.465 13:32:21 -- setup/hugepages.sh@137 -- # setup output 00:03:18.465 13:32:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.465 13:32:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:19.836 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.836 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.836 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.836 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.836 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.836 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.836 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.836 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.836 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.769 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.031 13:32:23 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:21.031 13:32:23 -- setup/hugepages.sh@89 -- # local node 00:03:21.031 13:32:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.031 13:32:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.031 13:32:23 -- setup/hugepages.sh@92 -- # local surp 00:03:21.031 13:32:23 -- setup/hugepages.sh@93 -- # local resv 00:03:21.031 13:32:23 -- setup/hugepages.sh@94 -- # local anon 00:03:21.031 13:32:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.031 13:32:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.031 13:32:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.031 13:32:23 -- setup/common.sh@18 -- # local node= 00:03:21.031 13:32:23 -- setup/common.sh@19 -- # local var val 00:03:21.031 13:32:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.031 13:32:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.031 13:32:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.031 13:32:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.031 13:32:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.031 13:32:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.031 13:32:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40440776 kB' 'MemAvailable: 45487316 kB' 'Buffers: 2696 kB' 'Cached: 15524720 kB' 'SwapCached: 0 kB' 'Active: 11450940 kB' 'Inactive: 4569992 kB' 'Active(anon): 10795272 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496820 kB' 'Mapped: 176796 kB' 'Shmem: 10301756 kB' 'KReclaimable: 465316 kB' 'Slab: 840124 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374808 kB' 'KernelStack: 13040 kB' 'PageTables: 8840 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198540 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.031 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.031 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 
-- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.032 13:32:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.032 13:32:23 -- setup/common.sh@33 -- # echo 0 00:03:21.032 13:32:23 -- setup/common.sh@33 -- # return 0 00:03:21.032 13:32:23 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.032 13:32:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.032 13:32:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.032 13:32:23 -- setup/common.sh@18 -- # local node= 00:03:21.032 13:32:23 -- setup/common.sh@19 -- # local var val 00:03:21.032 13:32:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.032 13:32:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.032 13:32:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.032 13:32:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.032 13:32:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.032 13:32:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.032 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40445612 kB' 'MemAvailable: 45492152 kB' 'Buffers: 2696 kB' 'Cached: 15524720 kB' 'SwapCached: 0 kB' 'Active: 11450692 kB' 'Inactive: 4569992 kB' 'Active(anon): 10795024 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496556 kB' 'Mapped: 176736 kB' 'Shmem: 10301756 kB' 'KReclaimable: 465316 kB' 'Slab: 840228 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374912 kB' 'KernelStack: 12992 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198524 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 
'DirectMap1G: 36700160 kB' 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 
00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.033 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.033 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 
-- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.034 13:32:23 -- setup/common.sh@33 -- # echo 0 00:03:21.034 13:32:23 -- setup/common.sh@33 -- # return 0 00:03:21.034 13:32:23 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.034 13:32:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.034 13:32:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.034 13:32:23 -- setup/common.sh@18 -- # local node= 00:03:21.034 13:32:23 -- setup/common.sh@19 -- # local var val 00:03:21.034 13:32:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.034 13:32:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.034 13:32:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.034 13:32:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.034 13:32:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.034 13:32:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40445684 kB' 'MemAvailable: 45492224 kB' 'Buffers: 2696 kB' 'Cached: 15524732 kB' 'SwapCached: 0 kB' 'Active: 11450120 kB' 'Inactive: 4569992 kB' 'Active(anon): 10794452 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495952 kB' 'Mapped: 176760 kB' 'Shmem: 10301768 kB' 'KReclaimable: 465316 kB' 'Slab: 840308 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374992 kB' 'KernelStack: 13104 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198524 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.034 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.034 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 
00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.035 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.035 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.035 13:32:23 -- setup/common.sh@33 -- # echo 0 00:03:21.035 13:32:23 -- setup/common.sh@33 -- # return 0 00:03:21.036 13:32:23 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.036 13:32:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.036 nr_hugepages=1024 00:03:21.036 13:32:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.036 resv_hugepages=0 00:03:21.036 13:32:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.036 surplus_hugepages=0 00:03:21.036 13:32:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.036 anon_hugepages=0 00:03:21.036 13:32:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.036 13:32:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.036 13:32:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.036 13:32:23 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:21.036 13:32:23 -- setup/common.sh@18 -- # local node= 00:03:21.036 13:32:23 -- setup/common.sh@19 -- # local var val 00:03:21.036 13:32:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.036 13:32:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.036 13:32:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.036 13:32:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.036 13:32:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.036 13:32:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40445684 kB' 'MemAvailable: 45492224 kB' 'Buffers: 2696 kB' 'Cached: 15524748 kB' 'SwapCached: 0 kB' 'Active: 11450088 kB' 'Inactive: 4569992 kB' 'Active(anon): 10794420 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495916 kB' 'Mapped: 176760 kB' 'Shmem: 10301784 kB' 'KReclaimable: 465316 kB' 'Slab: 840308 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374992 kB' 'KernelStack: 13088 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198524 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.036 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.036 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # 
continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 
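The loop traced above is the setup/common.sh get_meminfo helper walking /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) one field at a time, skipping every key that is not the one requested and echoing the matching value (here HugePages_Total). A minimal sketch of that parsing logic, assuming the helper name and file layout implied by the trace rather than quoting the script verbatim:

    # get_meminfo_sketch KEY [NODE] - illustrative re-creation of the traced loop,
    # not the SPDK helper itself.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node counters come from sysfs when a node index is supplied (as in the trace).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node * }                    # per-node files prefix each line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"  # e.g. "HugePages_Total:  1024" -> var/val
            if [[ $var == "$get" ]]; then
                echo "$val"                         # the "echo 1024" / "echo 0" seen in the log
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # Usage matching the traced calls, e.g.: get_meminfo_sketch HugePages_Total 0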
00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.037 13:32:23 -- setup/common.sh@33 -- # echo 1024 00:03:21.037 13:32:23 -- setup/common.sh@33 -- # return 0 00:03:21.037 13:32:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.037 13:32:23 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.037 13:32:23 -- setup/hugepages.sh@27 -- # local node 00:03:21.037 13:32:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.037 13:32:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.037 13:32:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.037 13:32:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:21.037 13:32:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.037 13:32:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.037 13:32:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.037 13:32:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.037 13:32:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.037 13:32:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.037 13:32:23 -- setup/common.sh@18 -- # local node=0 00:03:21.037 13:32:23 -- setup/common.sh@19 -- # local var val 00:03:21.037 13:32:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.037 13:32:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.037 13:32:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.037 13:32:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.037 13:32:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.037 13:32:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829764 kB' 'MemFree: 18448164 kB' 'MemUsed: 14381600 kB' 'SwapCached: 0 
kB' 'Active: 7406100 kB' 'Inactive: 4126124 kB' 'Active(anon): 6891724 kB' 'Inactive(anon): 0 kB' 'Active(file): 514376 kB' 'Inactive(file): 4126124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11225996 kB' 'Mapped: 168744 kB' 'AnonPages: 309372 kB' 'Shmem: 6585496 kB' 'KernelStack: 7016 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219868 kB' 'Slab: 391360 kB' 'SReclaimable: 219868 kB' 'SUnreclaim: 171492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.037 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.037 13:32:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 
13:32:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': 
' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # continue 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.038 13:32:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.038 13:32:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.038 13:32:23 -- setup/common.sh@33 -- # echo 0 00:03:21.038 13:32:23 -- setup/common.sh@33 -- # return 0 00:03:21.038 13:32:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.038 13:32:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.038 13:32:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.038 13:32:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.038 13:32:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.038 node0=1024 expecting 1024 00:03:21.038 13:32:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.038 00:03:21.038 real 0m2.786s 00:03:21.038 user 0m0.797s 00:03:21.038 sys 0m1.001s 00:03:21.038 13:32:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.038 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:03:21.038 ************************************ 00:03:21.038 END TEST default_setup 00:03:21.039 ************************************ 00:03:21.297 13:32:23 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:21.297 13:32:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.297 13:32:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.297 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:03:21.297 ************************************ 00:03:21.297 START TEST per_node_1G_alloc 00:03:21.297 ************************************ 00:03:21.297 13:32:23 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:21.297 13:32:23 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:21.297 13:32:23 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:21.297 13:32:23 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.297 13:32:23 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:21.297 13:32:23 -- setup/hugepages.sh@51 -- # shift 00:03:21.297 13:32:23 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:21.297 13:32:23 -- setup/hugepages.sh@52 -- # local node_ids 00:03:21.297 13:32:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.297 13:32:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.297 13:32:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:21.297 13:32:23 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:21.297 13:32:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.297 13:32:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.297 13:32:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.297 13:32:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.297 13:32:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.297 13:32:23 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:21.297 13:32:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.297 13:32:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:21.297 13:32:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.297 13:32:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:21.297 13:32:23 -- setup/hugepages.sh@73 -- # return 0 00:03:21.297 13:32:23 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:21.297 
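For the per_node_1G_alloc preparation traced above, get_test_nr_hugepages turns the requested 1048576 kB (1 GiB) into a page count using the default 2048 kB hugepage size, which gives 512 pages, and records that count for each node passed in (nodes 0 and 1) - matching NRHUGE=512 here and the HUGENODE=0,1 assignment that follows. A rough sketch of that arithmetic, with variable names chosen for illustration only:

    # Illustrative only - mirrors the sizing visible in the trace, not the SPDK script.
    size_kb=1048576            # requested allocation per node (1 GiB)
    default_hugepage_kb=2048   # default hugepage size on this system
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512 pages

    nodes_test=()
    for node_id in 0 1; do
        nodes_test[node_id]=$nr_hugepages   # node0=512, node1=512 -> 1 GiB of 2 MiB pages each
    done

    NRHUGE=$nr_hugepages   # 512, as set before setup.sh runs in the log
    HUGENODE=0,1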
13:32:23 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:21.297 13:32:23 -- setup/hugepages.sh@146 -- # setup output 00:03:21.297 13:32:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.297 13:32:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:22.671 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.671 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.671 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.671 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.671 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.671 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.671 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.671 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.671 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.671 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.671 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.671 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.671 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.671 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.671 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.671 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.671 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.671 13:32:25 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:22.671 13:32:25 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:22.671 13:32:25 -- setup/hugepages.sh@89 -- # local node 00:03:22.671 13:32:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.671 13:32:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.671 13:32:25 -- setup/hugepages.sh@92 -- # local surp 00:03:22.671 13:32:25 -- setup/hugepages.sh@93 -- # local resv 00:03:22.671 13:32:25 -- setup/hugepages.sh@94 -- # local anon 00:03:22.671 13:32:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.671 13:32:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.671 13:32:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.671 13:32:25 -- setup/common.sh@18 -- # local node= 00:03:22.671 13:32:25 -- setup/common.sh@19 -- # local var val 00:03:22.671 13:32:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.671 13:32:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.671 13:32:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.671 13:32:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.671 13:32:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.671 13:32:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40427236 kB' 'MemAvailable: 45473776 kB' 'Buffers: 2696 kB' 'Cached: 15524804 kB' 'SwapCached: 0 kB' 'Active: 11450980 kB' 'Inactive: 4569992 kB' 'Active(anon): 10795312 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496744 kB' 'Mapped: 176808 kB' 
'Shmem: 10301840 kB' 'KReclaimable: 465316 kB' 'Slab: 840380 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 375064 kB' 'KernelStack: 13136 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198540 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.671 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.671 13:32:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- 
setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.672 13:32:25 -- setup/common.sh@33 -- # echo 0 00:03:22.672 13:32:25 -- setup/common.sh@33 -- # return 0 00:03:22.672 13:32:25 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.672 13:32:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.672 13:32:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.672 13:32:25 -- setup/common.sh@18 -- # local node= 00:03:22.672 13:32:25 -- setup/common.sh@19 -- # local var val 00:03:22.672 13:32:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.672 13:32:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.672 13:32:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.672 13:32:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.672 13:32:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.672 13:32:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40431172 kB' 'MemAvailable: 45477712 kB' 'Buffers: 2696 kB' 'Cached: 15524808 kB' 'SwapCached: 0 kB' 'Active: 11450776 kB' 'Inactive: 4569992 kB' 'Active(anon): 10795108 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496624 kB' 'Mapped: 176820 kB' 'Shmem: 10301844 kB' 'KReclaimable: 465316 kB' 'Slab: 840368 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 375052 kB' 'KernelStack: 13104 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198540 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 
13:32:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.672 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.672 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 
13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 
13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.673 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.673 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.673 13:32:25 -- setup/common.sh@33 -- # echo 0 00:03:22.673 13:32:25 -- setup/common.sh@33 -- # return 0 00:03:22.673 13:32:25 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.935 13:32:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.935 13:32:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.935 13:32:25 -- setup/common.sh@18 -- # local node= 00:03:22.935 13:32:25 -- setup/common.sh@19 -- # local var val 00:03:22.935 13:32:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.935 13:32:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.935 13:32:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.935 13:32:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.935 13:32:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.935 13:32:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.935 13:32:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40433220 kB' 'MemAvailable: 45479760 kB' 'Buffers: 2696 kB' 'Cached: 15524824 kB' 'SwapCached: 0 kB' 'Active: 11450772 kB' 'Inactive: 4569992 kB' 'Active(anon): 10795104 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496600 kB' 'Mapped: 176808 kB' 'Shmem: 10301860 kB' 'KReclaimable: 465316 kB' 'Slab: 840352 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 375036 kB' 'KernelStack: 13072 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198508 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.935 13:32:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.935 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.935 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # 
continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.936 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.936 
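The xtrace here is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the requested key (HugePages_Rsvd at this point). A minimal sketch of that scan, with names taken from the trace — the real helper also snapshots the file into an array and handles per-node files:

    # Sketch of the key scan seen in the trace: split "key: value" pairs on ': '
    # and print the value once the requested key comes up.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
    }
    get_meminfo HugePages_Rsvd   # prints 0 here, matching the echo 0 / resv=0 below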
13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.936 13:32:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.937 13:32:25 -- setup/common.sh@33 -- # echo 0 00:03:22.937 13:32:25 -- setup/common.sh@33 -- # return 0 00:03:22.937 13:32:25 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.937 13:32:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.937 nr_hugepages=1024 00:03:22.937 13:32:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.937 resv_hugepages=0 00:03:22.937 13:32:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.937 surplus_hugepages=0 00:03:22.937 13:32:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.937 anon_hugepages=0 00:03:22.937 13:32:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.937 13:32:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.937 13:32:25 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:03:22.937 13:32:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.937 13:32:25 -- setup/common.sh@18 -- # local node= 00:03:22.937 13:32:25 -- setup/common.sh@19 -- # local var val 00:03:22.937 13:32:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.937 13:32:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.937 13:32:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.937 13:32:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.937 13:32:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.937 13:32:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40433756 kB' 'MemAvailable: 45480296 kB' 'Buffers: 2696 kB' 'Cached: 15524836 kB' 'SwapCached: 0 kB' 'Active: 11450504 kB' 'Inactive: 4569992 kB' 'Active(anon): 10794836 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496188 kB' 'Mapped: 176784 kB' 'Shmem: 10301872 kB' 'KReclaimable: 465316 kB' 'Slab: 840408 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 375092 kB' 'KernelStack: 13136 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11964900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198508 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.937 13:32:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.937 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.937 13:32:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.938 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.938 13:32:25 -- setup/common.sh@33 -- # echo 1024 00:03:22.938 13:32:25 -- setup/common.sh@33 -- # return 0 00:03:22.938 13:32:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.938 13:32:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.938 13:32:25 -- setup/hugepages.sh@27 -- # local node 00:03:22.938 13:32:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.938 13:32:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.938 13:32:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.938 13:32:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.938 13:32:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.938 13:32:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.938 13:32:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.938 13:32:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.938 13:32:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.938 13:32:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.938 13:32:25 -- setup/common.sh@18 -- # local node=0 00:03:22.938 13:32:25 -- setup/common.sh@19 -- # local var val 00:03:22.938 13:32:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.938 13:32:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.938 13:32:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.938 13:32:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.938 13:32:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.938 13:32:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.938 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32829764 kB' 'MemFree: 19491776 kB' 'MemUsed: 13337988 kB' 'SwapCached: 0 kB' 'Active: 7404976 kB' 'Inactive: 4126124 kB' 'Active(anon): 6890600 kB' 'Inactive(anon): 0 kB' 'Active(file): 514376 kB' 'Inactive(file): 4126124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11226060 kB' 'Mapped: 168332 kB' 'AnonPages: 308212 kB' 'Shmem: 6585560 kB' 'KernelStack: 7032 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219868 kB' 'Slab: 391488 kB' 'SReclaimable: 219868 kB' 'SUnreclaim: 171620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 
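When get_meminfo is called with a node number (local node=0 above), the same scan runs against that node's meminfo file instead of /proc/meminfo, and the leading "Node 0 " prefix is stripped so the keys match — that is what the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace does. A hedged sketch of the file selection and prefix strip:

    # Pick the per-node meminfo when a node is given; otherwise fall back to the
    # global file. extglob is needed for the +([0-9]) pattern in the expansion.
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp   # "HugePages_Surp: 0" for node0 in this run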
00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- 
setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 
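Everything this pass collects feeds one identity, checked at hugepages.sh@107 and @110 earlier in the trace: the kernel's HugePages_Total has to equal the requested page count plus surplus plus reserved pages. With the values echoed in this run the check is trivial:

    # Worked check with this run's numbers (nr_hugepages=1024, surp=0, resv=0),
    # reusing the get_meminfo sketch above; 1024 == 1024 + 0 + 0.
    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)   # 1024 in the snapshot above
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"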
00:03:22.939 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.939 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.939 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@33 -- # echo 0 00:03:22.940 13:32:25 -- setup/common.sh@33 -- # return 0 00:03:22.940 13:32:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.940 13:32:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.940 13:32:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.940 13:32:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.940 13:32:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.940 13:32:25 -- setup/common.sh@18 -- # local node=1 00:03:22.940 13:32:25 -- setup/common.sh@19 -- # local var val 00:03:22.940 13:32:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.940 13:32:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.940 13:32:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.940 13:32:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.940 13:32:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.940 13:32:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20941728 kB' 'MemUsed: 6770096 kB' 'SwapCached: 0 kB' 'Active: 4045120 kB' 'Inactive: 443868 kB' 'Active(anon): 3903828 kB' 'Inactive(anon): 0 kB' 'Active(file): 141292 kB' 'Inactive(file): 443868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4301484 kB' 'Mapped: 8452 kB' 'AnonPages: 187552 kB' 'Shmem: 3716324 kB' 'KernelStack: 6104 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 245448 kB' 'Slab: 448920 kB' 'SReclaimable: 245448 kB' 'SUnreclaim: 203472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 
13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- 
setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.940 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.940 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # continue 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.941 13:32:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.941 13:32:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.941 13:32:25 -- setup/common.sh@33 -- # echo 0 00:03:22.941 13:32:25 -- setup/common.sh@33 -- # return 0 00:03:22.941 13:32:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.941 13:32:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.941 13:32:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.941 13:32:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.941 node0=512 expecting 512 00:03:22.941 13:32:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.941 13:32:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.941 13:32:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.941 13:32:25 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.941 node1=512 expecting 512 00:03:22.941 13:32:25 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.941 00:03:22.941 real 0m1.666s 00:03:22.941 user 0m0.701s 00:03:22.941 sys 0m0.922s 00:03:22.941 13:32:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.941 13:32:25 -- common/autotest_common.sh@10 -- # set +x 00:03:22.941 ************************************ 00:03:22.941 END TEST per_node_1G_alloc 00:03:22.941 ************************************ 00:03:22.941 13:32:25 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:22.941 
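per_node_1G_alloc ends with each NUMA node holding half of the 1024 pages; the "node0=512 expecting 512" and "node1=512 expecting 512" echoes and the final [[ 512 == 512 ]] above are that comparison. A hedged sketch of the per-node tally, reading the standard per-node meminfo rather than the script's internal arrays:

    # Compare what each node actually holds against its expected share.
    declare -A expected=([0]=512 [1]=512)
    for node in 0 1; do
        actual=$(awk '/HugePages_Total/ {print $NF}' \
            /sys/devices/system/node/node$node/meminfo)
        echo "node$node=$actual expecting ${expected[$node]}"
        [[ $actual == "${expected[$node]}" ]] || exit 1
    done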
13:32:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.941 13:32:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.941 13:32:25 -- common/autotest_common.sh@10 -- # set +x 00:03:22.941 ************************************ 00:03:22.941 START TEST even_2G_alloc 00:03:22.941 ************************************ 00:03:22.941 13:32:25 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:22.941 13:32:25 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:22.941 13:32:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.941 13:32:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.941 13:32:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.941 13:32:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.941 13:32:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.941 13:32:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.941 13:32:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.941 13:32:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.941 13:32:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.941 13:32:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.941 13:32:25 -- setup/hugepages.sh@83 -- # : 512 00:03:22.941 13:32:25 -- setup/hugepages.sh@84 -- # : 1 00:03:22.941 13:32:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.941 13:32:25 -- setup/hugepages.sh@83 -- # : 0 00:03:22.941 13:32:25 -- setup/hugepages.sh@84 -- # : 0 00:03:22.941 13:32:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.941 13:32:25 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:22.941 13:32:25 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:22.941 13:32:25 -- setup/hugepages.sh@153 -- # setup output 00:03:22.941 13:32:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.941 13:32:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:24.849 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:24.849 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.849 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:24.849 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:24.849 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:24.849 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:24.849 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:24.849 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:24.849 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:24.849 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:24.849 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:24.849 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:24.849 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:24.849 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:24.849 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:24.849 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:03:24.849 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:24.849 13:32:27 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:24.849 13:32:27 -- setup/hugepages.sh@89 -- # local node 00:03:24.849 13:32:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.849 13:32:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.849 13:32:27 -- setup/hugepages.sh@92 -- # local surp 00:03:24.849 13:32:27 -- setup/hugepages.sh@93 -- # local resv 00:03:24.849 13:32:27 -- setup/hugepages.sh@94 -- # local anon 00:03:24.849 13:32:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.849 13:32:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.849 13:32:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.849 13:32:27 -- setup/common.sh@18 -- # local node= 00:03:24.849 13:32:27 -- setup/common.sh@19 -- # local var val 00:03:24.849 13:32:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.849 13:32:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.849 13:32:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.849 13:32:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.849 13:32:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.849 13:32:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40415400 kB' 'MemAvailable: 45461940 kB' 'Buffers: 2696 kB' 'Cached: 15524900 kB' 'SwapCached: 0 kB' 'Active: 11449392 kB' 'Inactive: 4569992 kB' 'Active(anon): 10793724 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495084 kB' 'Mapped: 176572 kB' 'Shmem: 10301936 kB' 'KReclaimable: 465316 kB' 'Slab: 840136 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374820 kB' 'KernelStack: 12992 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11951240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198464 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 
00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.849 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.849 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 
13:32:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.850 13:32:27 -- setup/common.sh@33 -- # echo 0 00:03:24.850 13:32:27 -- setup/common.sh@33 -- # 
return 0 00:03:24.850 13:32:27 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.850 13:32:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.850 13:32:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.850 13:32:27 -- setup/common.sh@18 -- # local node= 00:03:24.850 13:32:27 -- setup/common.sh@19 -- # local var val 00:03:24.850 13:32:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.850 13:32:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.850 13:32:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.850 13:32:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.850 13:32:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.850 13:32:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.850 13:32:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40416480 kB' 'MemAvailable: 45463020 kB' 'Buffers: 2696 kB' 'Cached: 15524904 kB' 'SwapCached: 0 kB' 'Active: 11449872 kB' 'Inactive: 4569992 kB' 'Active(anon): 10794204 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495504 kB' 'Mapped: 176572 kB' 'Shmem: 10301940 kB' 'KReclaimable: 465316 kB' 'Slab: 840112 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374796 kB' 'KernelStack: 12992 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11951616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198464 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.850 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.850 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.851 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 
-- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.852 13:32:27 -- setup/common.sh@33 -- # echo 0 00:03:24.852 13:32:27 -- setup/common.sh@33 -- # return 0 00:03:24.852 13:32:27 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.852 13:32:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.852 13:32:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.852 13:32:27 -- setup/common.sh@18 -- # local node= 00:03:24.852 13:32:27 -- setup/common.sh@19 -- # local var val 00:03:24.852 13:32:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.852 13:32:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.852 13:32:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.852 13:32:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.852 13:32:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.852 13:32:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40414968 kB' 'MemAvailable: 45461508 kB' 'Buffers: 2696 kB' 'Cached: 15524904 kB' 'SwapCached: 0 kB' 'Active: 11445584 kB' 'Inactive: 4569992 kB' 'Active(anon): 10789916 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491236 kB' 'Mapped: 176560 kB' 'Shmem: 10301940 kB' 'KReclaimable: 465316 kB' 'Slab: 840116 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374800 kB' 'KernelStack: 12992 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11947264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 
13:32:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.852 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 
13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.853 13:32:27 -- setup/common.sh@33 -- # echo 0 00:03:24.853 13:32:27 -- setup/common.sh@33 -- # return 0 00:03:24.853 13:32:27 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.853 13:32:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.853 nr_hugepages=1024 00:03:24.853 13:32:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.853 resv_hugepages=0 00:03:24.853 13:32:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.853 surplus_hugepages=0 00:03:24.853 13:32:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.853 anon_hugepages=0 00:03:24.853 13:32:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.853 13:32:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.853 13:32:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.853 13:32:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.853 13:32:27 -- setup/common.sh@18 -- # local node= 00:03:24.853 13:32:27 -- setup/common.sh@19 -- # local var val 00:03:24.853 13:32:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.853 13:32:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.853 13:32:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.853 13:32:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.853 13:32:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.853 13:32:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40412304 kB' 'MemAvailable: 45458844 kB' 'Buffers: 2696 kB' 'Cached: 15524908 kB' 'SwapCached: 0 kB' 'Active: 11446912 kB' 'Inactive: 4569992 kB' 'Active(anon): 10791244 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492516 kB' 'Mapped: 176188 kB' 'Shmem: 10301944 kB' 'KReclaimable: 465316 kB' 'Slab: 840112 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374796 kB' 'KernelStack: 12944 kB' 
'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11949788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.854 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.855 13:32:27 -- setup/common.sh@33 -- # echo 1024 00:03:24.855 13:32:27 -- setup/common.sh@33 -- # return 0 00:03:24.855 13:32:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.855 13:32:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.855 13:32:27 -- setup/hugepages.sh@27 -- # local node 00:03:24.855 13:32:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.855 13:32:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.855 13:32:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.855 13:32:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.855 13:32:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.855 13:32:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.855 13:32:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.855 13:32:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.855 13:32:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.855 13:32:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.855 13:32:27 -- setup/common.sh@18 -- # local node=0 00:03:24.855 13:32:27 -- setup/common.sh@19 -- # local var val 00:03:24.855 13:32:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.855 13:32:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.855 13:32:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.855 13:32:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.855 13:32:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.855 13:32:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829764 kB' 'MemFree: 19481136 kB' 'MemUsed: 13348628 kB' 'SwapCached: 0 kB' 'Active: 7402656 kB' 'Inactive: 4126124 kB' 'Active(anon): 6888280 kB' 'Inactive(anon): 0 kB' 'Active(file): 514376 kB' 'Inactive(file): 4126124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11226156 kB' 'Mapped: 167536 kB' 'AnonPages: 305776 kB' 'Shmem: 6585656 kB' 'KernelStack: 7048 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219868 kB' 'Slab: 391316 kB' 'SReclaimable: 219868 kB' 'SUnreclaim: 171448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.855 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.855 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 
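This block repeats the same field scan against /sys/devices/system/node/node0/meminfo to pull HugePages_Surp for node 0. A compact equivalent (helper name is illustrative), assuming the per-node file layout shown in the printf above:
  get_node_field() {   # usage: get_node_field <node> <field>
      local node=$1 field=$2
      # Drop the "Node N " prefix, then match the requested key, as the trace does.
      sed 's/^Node [0-9]* //' "/sys/devices/system/node/node${node}/meminfo" |
          awk -v f="$field" -F': *' '$1 == f {print $2+0}'
  }
  get_node_field 0 HugePages_Surp   # 0 for the values printed above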
00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@33 -- # echo 0 00:03:24.856 13:32:27 -- setup/common.sh@33 -- # return 0 00:03:24.856 13:32:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.856 13:32:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.856 13:32:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.856 13:32:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.856 13:32:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.856 13:32:27 -- setup/common.sh@18 -- # local node=1 00:03:24.856 13:32:27 -- setup/common.sh@19 -- # local var val 00:03:24.856 13:32:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.856 13:32:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.856 13:32:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.856 13:32:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.856 13:32:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.856 13:32:27 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20927180 kB' 'MemUsed: 6784644 kB' 'SwapCached: 0 kB' 'Active: 4041208 kB' 'Inactive: 443868 kB' 'Active(anon): 3899916 kB' 'Inactive(anon): 0 kB' 'Active(file): 141292 kB' 'Inactive(file): 443868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4301488 kB' 'Mapped: 8452 kB' 'AnonPages: 183688 kB' 'Shmem: 3716328 kB' 'KernelStack: 5944 kB' 'PageTables: 3520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 245448 kB' 'Slab: 448776 kB' 'SReclaimable: 245448 kB' 'SUnreclaim: 203328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 13:32:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # continue 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 13:32:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 13:32:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.857 13:32:27 -- setup/common.sh@33 -- # echo 0 00:03:24.857 13:32:27 -- setup/common.sh@33 -- # return 0 00:03:24.857 13:32:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.857 13:32:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.857 13:32:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.858 13:32:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.858 13:32:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.858 node0=512 expecting 512 00:03:24.858 13:32:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.858 13:32:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.858 13:32:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.858 13:32:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:24.858 node1=512 expecting 512 00:03:24.858 13:32:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:24.858 00:03:24.858 real 0m1.840s 00:03:24.858 user 0m0.750s 00:03:24.858 sys 0m1.052s 00:03:24.858 13:32:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.858 13:32:27 -- common/autotest_common.sh@10 -- # set +x 00:03:24.858 ************************************ 00:03:24.858 END TEST even_2G_alloc 00:03:24.858 ************************************ 00:03:24.858 13:32:27 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:24.858 13:32:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.858 13:32:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.858 13:32:27 -- common/autotest_common.sh@10 -- # set +x 00:03:25.144 ************************************ 00:03:25.144 START TEST odd_alloc 00:03:25.144 ************************************ 00:03:25.144 13:32:27 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:25.144 13:32:27 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:25.144 13:32:27 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:25.144 13:32:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.144 13:32:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.144 13:32:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:25.144 13:32:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.144 13:32:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.144 13:32:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.144 13:32:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:25.144 13:32:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.144 13:32:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.144 13:32:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.144 13:32:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.144 13:32:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:25.144 13:32:27 -- setup/hugepages.sh@81 -- # (( 
_no_nodes > 0 )) 00:03:25.144 13:32:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:25.144 13:32:27 -- setup/hugepages.sh@83 -- # : 513 00:03:25.144 13:32:27 -- setup/hugepages.sh@84 -- # : 1 00:03:25.144 13:32:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.144 13:32:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:25.144 13:32:27 -- setup/hugepages.sh@83 -- # : 0 00:03:25.144 13:32:27 -- setup/hugepages.sh@84 -- # : 0 00:03:25.144 13:32:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.144 13:32:27 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:25.144 13:32:27 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:25.144 13:32:27 -- setup/hugepages.sh@160 -- # setup output 00:03:25.144 13:32:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.144 13:32:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:26.521 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:26.521 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.521 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:26.521 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:26.521 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:26.521 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:26.521 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:26.521 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:26.521 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:26.521 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:26.521 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:26.521 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:26.521 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:26.521 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:26.521 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:26.521 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:26.521 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:26.521 13:32:29 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:26.521 13:32:29 -- setup/hugepages.sh@89 -- # local node 00:03:26.521 13:32:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.521 13:32:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.521 13:32:29 -- setup/hugepages.sh@92 -- # local surp 00:03:26.521 13:32:29 -- setup/hugepages.sh@93 -- # local resv 00:03:26.521 13:32:29 -- setup/hugepages.sh@94 -- # local anon 00:03:26.521 13:32:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.521 13:32:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.521 13:32:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.521 13:32:29 -- setup/common.sh@18 -- # local node= 00:03:26.521 13:32:29 -- setup/common.sh@19 -- # local var val 00:03:26.521 13:32:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.521 13:32:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.521 13:32:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.521 13:32:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.521 13:32:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.521 13:32:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 
-- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40395832 kB' 'MemAvailable: 45442372 kB' 'Buffers: 2696 kB' 'Cached: 15525000 kB' 'SwapCached: 0 kB' 'Active: 11445568 kB' 'Inactive: 4569992 kB' 'Active(anon): 10789900 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491616 kB' 'Mapped: 175816 kB' 'Shmem: 10302036 kB' 'KReclaimable: 465316 kB' 'Slab: 840056 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374740 kB' 'KernelStack: 13280 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609796 kB' 'Committed_AS: 11947992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198556 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.521 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.521 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 
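This pass over /proc/meminfo is collecting AnonHugePages because transparent hugepages are enabled ("always [madvise] never" above is not locked to never). A sketch of that guard, using the standard THP sysfs path and an illustrative variable name:
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB currently in use
  else
      anon=0
  fi
  echo "anon=${anon}"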
13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.522 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.522 13:32:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.522 13:32:29 -- setup/common.sh@33 -- # echo 0 00:03:26.522 13:32:29 -- setup/common.sh@33 -- # return 0 00:03:26.784 13:32:29 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.784 13:32:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.784 13:32:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.784 13:32:29 -- setup/common.sh@18 -- # local node= 00:03:26.784 13:32:29 -- setup/common.sh@19 -- # local var val 00:03:26.784 13:32:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.784 13:32:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.784 13:32:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.784 13:32:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.784 13:32:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.784 13:32:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40393244 kB' 'MemAvailable: 45439784 kB' 'Buffers: 2696 kB' 'Cached: 15525004 kB' 'SwapCached: 0 kB' 'Active: 11445756 kB' 'Inactive: 4569992 kB' 'Active(anon): 10790088 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491408 kB' 'Mapped: 175868 kB' 'Shmem: 10302040 kB' 'KReclaimable: 465316 kB' 
'Slab: 840056 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374740 kB' 'KernelStack: 13216 kB' 'PageTables: 9524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609796 kB' 'Committed_AS: 11948004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198588 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 
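For odd_alloc the harness asked for 2098176 kB, i.e. 1025 pages of 2 MiB after rounding up, which cannot split evenly over two nodes; per the nodes_test assignments above, one node is programmed with 512 pages and the other with 513. The arithmetic behind that split, sketched with illustrative names:
  pages=1025 nodes=2
  base=$(( pages / nodes ))      # 512 on one node
  extra=$(( pages % nodes ))     # 1 leftover page lands on the other node
  echo "split: ${base} + $(( base + extra )) = $(( 2 * base + extra ))"   # 512 + 513 = 1025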
00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.786 13:32:29 -- setup/common.sh@33 -- # echo 0 00:03:26.786 13:32:29 -- setup/common.sh@33 -- # return 0 00:03:26.786 13:32:29 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.786 13:32:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.786 13:32:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.786 13:32:29 -- setup/common.sh@18 -- # local node= 00:03:26.786 13:32:29 -- setup/common.sh@19 -- # local var val 00:03:26.786 13:32:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.786 13:32:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.786 13:32:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.786 13:32:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.786 13:32:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.786 13:32:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40393668 kB' 'MemAvailable: 45440208 kB' 'Buffers: 2696 kB' 'Cached: 15525016 kB' 'SwapCached: 0 kB' 'Active: 11445716 kB' 'Inactive: 4569992 kB' 'Active(anon): 10790048 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491224 kB' 'Mapped: 175800 kB' 'Shmem: 10302052 kB' 'KReclaimable: 465316 kB' 'Slab: 840068 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374752 kB' 'KernelStack: 13120 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609796 kB' 'Committed_AS: 11945608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 
13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- 
setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.786 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.786 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 
13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.787 13:32:29 -- setup/common.sh@33 -- # echo 0 00:03:26.787 13:32:29 -- setup/common.sh@33 -- # return 0 00:03:26.787 13:32:29 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.787 13:32:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:26.787 
nr_hugepages=1025 00:03:26.787 13:32:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.787 resv_hugepages=0 00:03:26.787 13:32:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.787 surplus_hugepages=0 00:03:26.787 13:32:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.787 anon_hugepages=0 00:03:26.787 13:32:29 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:26.787 13:32:29 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:26.787 13:32:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.787 13:32:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.787 13:32:29 -- setup/common.sh@18 -- # local node= 00:03:26.787 13:32:29 -- setup/common.sh@19 -- # local var val 00:03:26.787 13:32:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.787 13:32:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.787 13:32:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.787 13:32:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.787 13:32:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.787 13:32:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40393896 kB' 'MemAvailable: 45440436 kB' 'Buffers: 2696 kB' 'Cached: 15525032 kB' 'SwapCached: 0 kB' 'Active: 11444688 kB' 'Inactive: 4569992 kB' 'Active(anon): 10789020 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490304 kB' 'Mapped: 175792 kB' 'Shmem: 10302068 kB' 'KReclaimable: 465316 kB' 'Slab: 840148 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374832 kB' 'KernelStack: 13024 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609796 kB' 'Committed_AS: 11945624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 
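With the surplus and reserved queries both returning 0, the entries around this point echo the summary (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and assert that the kernel's HugePages_Total equals the requested pages plus surplus and reserved before re-reading HugePages_Total for the per-node breakdown. A hedged sketch of that accounting check, reusing the illustrative get_meminfo_sketch helper above:

    # Sketch of the consistency check traced at hugepages.sh@107/@109:
    # the kernel's HugePages_Total must equal the pages the test asked for
    # plus any surplus and reserved pages (all read via the parser above).
    verify_hugepage_accounting() {
        local nr_hugepages=$1 surp resv total
        surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
        resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
        total=$(get_meminfo_sketch HugePages_Total)   # 1025 in this run
        (( total == nr_hugepages + surp + resv ))
    }
    # verify_hugepage_accounting 1025 && echo "hugepage accounting OK"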
13:32:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.787 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.787 13:32:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 
13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.788 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.788 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.789 13:32:29 -- setup/common.sh@33 -- # echo 1025 00:03:26.789 13:32:29 -- setup/common.sh@33 -- # return 0 00:03:26.789 13:32:29 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:26.789 13:32:29 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.789 13:32:29 -- setup/hugepages.sh@27 -- # local node 00:03:26.789 13:32:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.789 13:32:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.789 13:32:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.789 13:32:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:26.789 13:32:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.789 13:32:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.789 13:32:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.789 13:32:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.789 13:32:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.789 13:32:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.789 13:32:29 -- setup/common.sh@18 -- # local node=0 00:03:26.789 13:32:29 -- setup/common.sh@19 -- # local var val 00:03:26.789 13:32:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.789 13:32:29 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.789 13:32:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.789 13:32:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.789 13:32:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.789 13:32:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829764 kB' 'MemFree: 19472784 kB' 'MemUsed: 13356980 kB' 'SwapCached: 0 kB' 'Active: 7402564 kB' 'Inactive: 4126124 kB' 'Active(anon): 6888188 kB' 'Inactive(anon): 0 kB' 'Active(file): 514376 kB' 'Inactive(file): 4126124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11226204 kB' 'Mapped: 167340 kB' 'AnonPages: 305644 kB' 'Shmem: 6585704 kB' 'KernelStack: 7016 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219868 kB' 'Slab: 391424 kB' 'SReclaimable: 219868 kB' 'SUnreclaim: 171556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 
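The node-specific query here shows how the parser switches sources: when a node number is passed, mem_f moves from /proc/meminfo to /sys/devices/system/node/node0/meminfo, the file is captured with mapfile, and the leading "Node 0 " prefix is stripped from every line (the extglob pattern in the mem=("${mem[@]#Node +([0-9]) }") entry) so the same key/value loop can run unchanged. A sketch of that path, again with an illustrative function name:

    # Illustrative per-node variant of the parser: pick the node's meminfo
    # file when it exists, strip the "Node N " prefix (needs extglob, as in
    # the traced script), then reuse the same ': '-splitting loop.
    shopt -s extglob
    get_node_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemFree: ..." -> "MemFree: ..."
        printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; break; }
        done
    }
    # get_node_meminfo_sketch HugePages_Total 0   -> 512 on this host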
00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- 
setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.789 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.789 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@33 -- # echo 0 00:03:26.790 13:32:29 -- setup/common.sh@33 -- # return 0 00:03:26.790 13:32:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.790 13:32:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.790 13:32:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.790 13:32:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.790 13:32:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.790 13:32:29 -- setup/common.sh@18 -- # local node=1 00:03:26.790 13:32:29 -- setup/common.sh@19 -- # local var val 00:03:26.790 13:32:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.790 13:32:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.790 13:32:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.790 13:32:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.790 13:32:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.790 13:32:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20922276 kB' 'MemUsed: 6789548 kB' 'SwapCached: 0 kB' 'Active: 4042172 kB' 'Inactive: 443868 kB' 'Active(anon): 3900880 kB' 'Inactive(anon): 0 kB' 'Active(file): 141292 kB' 'Inactive(file): 443868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4301552 kB' 'Mapped: 8452 kB' 'AnonPages: 184672 kB' 'Shmem: 3716392 kB' 'KernelStack: 6008 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 245448 kB' 'Slab: 448724 kB' 'SReclaimable: 245448 kB' 'SUnreclaim: 203276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 
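Node 0 reported 0 surplus pages and node 1 is being read the same way; the hugepages.sh bookkeeping then adds the (zero) reserved and surplus counts to each node's expected total and, as the "node0=512 expecting 513" / "node1=513 expecting 512" lines further on show, accepts the allocation as long as the per-node counts match as a set, even though the 512/513 split landed on opposite nodes in this run. A simplified, explicitly sorted version of that final comparison (the traced script builds associative-array key sets instead; the array names below are illustrative):

    # Per-node counts from this run: the test asked for one split, the
    # kernel delivered the same counts on swapped nodes; sorting both
    # lists makes the equivalence check explicit.
    nodes_test=(513 512)   # expected per node (node0, node1) in this run
    nodes_sys=(512 513)    # reported per node via /sys/devices/system/node/node*/meminfo
    t_sorted=$(printf '%s\n' "${nodes_test[@]}" | sort -n | xargs)
    s_sorted=$(printf '%s\n' "${nodes_sys[@]}" | sort -n | xargs)
    [[ $t_sorted == "$s_sorted" ]] && echo "per-node hugepage split OK"   # "512 513" == "512 513"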
00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 
-- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.790 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.790 13:32:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # continue 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.791 13:32:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.791 13:32:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.791 13:32:29 -- setup/common.sh@33 -- # echo 0 00:03:26.791 13:32:29 -- setup/common.sh@33 -- # return 0 00:03:26.791 13:32:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.791 13:32:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.791 13:32:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.791 13:32:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.791 13:32:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:26.791 node0=512 expecting 513 00:03:26.791 13:32:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.791 13:32:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.791 13:32:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.791 13:32:29 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:26.791 node1=513 
expecting 512 00:03:26.791 13:32:29 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:26.791 00:03:26.791 real 0m1.836s 00:03:26.791 user 0m0.785s 00:03:26.791 sys 0m1.009s 00:03:26.791 13:32:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:26.791 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:03:26.791 ************************************ 00:03:26.791 END TEST odd_alloc 00:03:26.791 ************************************ 00:03:26.791 13:32:29 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:26.791 13:32:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.791 13:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.791 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:03:27.050 ************************************ 00:03:27.050 START TEST custom_alloc 00:03:27.050 ************************************ 00:03:27.050 13:32:29 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:27.050 13:32:29 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:27.050 13:32:29 -- setup/hugepages.sh@169 -- # local node 00:03:27.050 13:32:29 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:27.050 13:32:29 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:27.050 13:32:29 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:27.050 13:32:29 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:27.050 13:32:29 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:27.050 13:32:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:27.050 13:32:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.050 13:32:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.050 13:32:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.050 13:32:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:27.050 13:32:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.050 13:32:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.050 13:32:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.050 13:32:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:27.050 13:32:29 -- setup/hugepages.sh@83 -- # : 256 00:03:27.050 13:32:29 -- setup/hugepages.sh@84 -- # : 1 00:03:27.050 13:32:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:27.050 13:32:29 -- setup/hugepages.sh@83 -- # : 0 00:03:27.050 13:32:29 -- setup/hugepages.sh@84 -- # : 0 00:03:27.050 13:32:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:27.050 13:32:29 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:27.050 13:32:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.050 13:32:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.050 13:32:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.051 13:32:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.051 13:32:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.051 13:32:29 
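The custom_alloc setup traced here (and continuing just below) turns a requested size in kB into a hugepage count and then spreads the pages across the two nodes. The division is an assumption read off the traced values (1048576 to 512 and 2097152 to 1024 with the default 2048 kB hugepage size), but the numbers line up exactly:

    # Sizes are in kB; Hugepagesize on this box is 2048 kB (see the meminfo
    # snapshots further down in the log).
    default_hugepages=2048
    echo $(( 1048576 / default_hugepages ))   # 512  -> nodes_hp[0]=512
    echo $(( 2097152 / default_hugepages ))   # 1024 -> nodes_hp[1]=1024
    echo $(( 512 + 1024 ))                    # 1536 pages requested in total

The HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' string assembled a little further on appears to be how that per-node request is handed to scripts/setup.sh before the test re-verifies the counts.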
-- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.051 13:32:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.051 13:32:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.051 13:32:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.051 13:32:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.051 13:32:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.051 13:32:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.051 13:32:29 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:27.051 13:32:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.051 13:32:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:27.051 13:32:29 -- setup/hugepages.sh@78 -- # return 0 00:03:27.051 13:32:29 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:27.051 13:32:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:27.051 13:32:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.051 13:32:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.051 13:32:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:27.051 13:32:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.051 13:32:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.051 13:32:29 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:27.051 13:32:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.051 13:32:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.051 13:32:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.051 13:32:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.051 13:32:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.051 13:32:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.051 13:32:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.051 13:32:29 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:27.051 13:32:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.051 13:32:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:27.051 13:32:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.051 13:32:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:27.051 13:32:29 -- setup/hugepages.sh@78 -- # return 0 00:03:27.051 13:32:29 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:27.051 13:32:29 -- setup/hugepages.sh@187 -- # setup output 00:03:27.051 13:32:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.051 13:32:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:28.425 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.425 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.425 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.425 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.425 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.425 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.425 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.425 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.425 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.425 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.425 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 
00:03:28.425 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.425 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.425 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.425 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.425 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.425 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.688 13:32:31 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:28.688 13:32:31 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:28.688 13:32:31 -- setup/hugepages.sh@89 -- # local node 00:03:28.688 13:32:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.688 13:32:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.688 13:32:31 -- setup/hugepages.sh@92 -- # local surp 00:03:28.688 13:32:31 -- setup/hugepages.sh@93 -- # local resv 00:03:28.688 13:32:31 -- setup/hugepages.sh@94 -- # local anon 00:03:28.688 13:32:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.688 13:32:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.688 13:32:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.688 13:32:31 -- setup/common.sh@18 -- # local node= 00:03:28.688 13:32:31 -- setup/common.sh@19 -- # local var val 00:03:28.688 13:32:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.688 13:32:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.688 13:32:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.688 13:32:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.688 13:32:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.688 13:32:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 39337716 kB' 'MemAvailable: 44384256 kB' 'Buffers: 2696 kB' 'Cached: 15525100 kB' 'SwapCached: 0 kB' 'Active: 11443072 kB' 'Inactive: 4569992 kB' 'Active(anon): 10787404 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488492 kB' 'Mapped: 175884 kB' 'Shmem: 10302136 kB' 'KReclaimable: 465316 kB' 'Slab: 840300 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374984 kB' 'KernelStack: 13008 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086532 kB' 'Committed_AS: 11945636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198508 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- 
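The "Already using the vfio-pci driver" lines above are scripts/setup.sh reporting that the PCI devices it manages are already bound to vfio-pci, so presumably no rebinding was needed before the hugepage check that follows. A quick manual way to confirm such a binding (not something the test scripts themselves run, and the address is just one device picked from the list above) is the driver symlink in sysfs:

    # Hypothetical spot check, not taken from the autotest scripts.
    basename "$(readlink /sys/bus/pci/devices/0000:84:00.0/driver)"   # vfio-pci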
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.688 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.688 13:32:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 
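The full /proc/meminfo snapshot printed a little above is already consistent with the 1536-page request: HugePages_Total and HugePages_Free are both 1536, and the Hugetlb figure is exactly the page count times the 2048 kB page size. A quick check with the values copied from that snapshot:

    # Sanity arithmetic only; the numbers are taken from the snapshot above.
    echo $(( 1536 * 2048 ))            # 3145728 kB, the Hugetlb line
    echo $(( 1536 * 2048 / 1048576 ))  # 3 GiB set aside as hugepages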
-- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.689 13:32:31 -- setup/common.sh@33 -- # echo 0 00:03:28.689 13:32:31 -- setup/common.sh@33 -- # return 0 00:03:28.689 13:32:31 -- setup/hugepages.sh@97 -- # anon=0 00:03:28.689 13:32:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.689 13:32:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.689 13:32:31 -- setup/common.sh@18 -- # local node= 00:03:28.689 13:32:31 -- setup/common.sh@19 -- # local var val 00:03:28.689 13:32:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.689 13:32:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.689 13:32:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.689 13:32:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.689 13:32:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.689 13:32:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 39338200 kB' 'MemAvailable: 44384740 kB' 'Buffers: 2696 kB' 'Cached: 15525104 kB' 'SwapCached: 0 kB' 'Active: 11443764 kB' 'Inactive: 4569992 kB' 'Active(anon): 10788096 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489220 kB' 'Mapped: 175880 kB' 'Shmem: 10302140 kB' 'KReclaimable: 465316 kB' 'Slab: 840288 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374972 kB' 'KernelStack: 13040 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086532 kB' 'Committed_AS: 11945648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.689 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.689 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 
00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.690 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.690 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.691 13:32:31 -- setup/common.sh@33 -- # echo 0 00:03:28.691 13:32:31 -- setup/common.sh@33 -- # return 0 00:03:28.691 13:32:31 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.691 13:32:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.691 13:32:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.691 13:32:31 -- setup/common.sh@18 -- # local node= 00:03:28.691 13:32:31 -- setup/common.sh@19 -- # local var val 00:03:28.691 13:32:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.691 13:32:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.691 13:32:31 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:28.691 13:32:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.691 13:32:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.691 13:32:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 39338452 kB' 'MemAvailable: 44384992 kB' 'Buffers: 2696 kB' 'Cached: 15525116 kB' 'SwapCached: 0 kB' 'Active: 11443652 kB' 'Inactive: 4569992 kB' 'Active(anon): 10787984 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489040 kB' 'Mapped: 175804 kB' 'Shmem: 10302152 kB' 'KReclaimable: 465316 kB' 'Slab: 840312 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374996 kB' 'KernelStack: 13040 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086532 kB' 'Committed_AS: 11945660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 
13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.691 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.691 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 
-- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- 
setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.692 13:32:31 -- setup/common.sh@33 -- # echo 0 00:03:28.692 13:32:31 -- setup/common.sh@33 -- # return 0 00:03:28.692 13:32:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.692 13:32:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:28.692 nr_hugepages=1536 00:03:28.692 13:32:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.692 resv_hugepages=0 00:03:28.692 13:32:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.692 surplus_hugepages=0 00:03:28.692 13:32:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.692 anon_hugepages=0 00:03:28.692 13:32:31 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.692 13:32:31 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:28.692 13:32:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.692 13:32:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.692 13:32:31 -- setup/common.sh@18 -- # local node= 00:03:28.692 13:32:31 -- setup/common.sh@19 -- # local var val 00:03:28.692 13:32:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.692 13:32:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.692 13:32:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.692 13:32:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.692 13:32:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.692 13:32:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 39338704 kB' 'MemAvailable: 44385244 kB' 'Buffers: 2696 kB' 'Cached: 15525132 kB' 'SwapCached: 0 kB' 'Active: 11443628 kB' 'Inactive: 4569992 kB' 'Active(anon): 10787960 
kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489008 kB' 'Mapped: 175804 kB' 'Shmem: 10302168 kB' 'KReclaimable: 465316 kB' 'Slab: 840304 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374988 kB' 'KernelStack: 13024 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086532 kB' 'Committed_AS: 11945676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.692 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.692 
13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.692 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 
13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.693 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.693 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.694 13:32:31 -- 
setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.694 13:32:31 -- setup/common.sh@33 -- # echo 1536 00:03:28.694 13:32:31 -- setup/common.sh@33 -- # return 0 00:03:28.694 13:32:31 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.694 13:32:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.694 13:32:31 -- setup/hugepages.sh@27 -- # local node 00:03:28.694 13:32:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.694 13:32:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.694 13:32:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.694 13:32:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.694 13:32:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.694 13:32:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.694 13:32:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.694 13:32:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.694 13:32:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.694 13:32:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.694 13:32:31 -- setup/common.sh@18 -- # local node=0 00:03:28.694 13:32:31 -- setup/common.sh@19 -- # local var val 00:03:28.694 13:32:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.694 13:32:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.694 13:32:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.694 13:32:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.694 13:32:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.694 13:32:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829764 kB' 'MemFree: 19472188 kB' 'MemUsed: 13357576 kB' 'SwapCached: 0 kB' 'Active: 7401460 kB' 'Inactive: 4126124 kB' 'Active(anon): 6887084 kB' 'Inactive(anon): 0 kB' 'Active(file): 514376 kB' 'Inactive(file): 4126124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11226216 kB' 'Mapped: 167352 kB' 'AnonPages: 304468 kB' 'Shmem: 6585716 kB' 'KernelStack: 7048 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219868 kB' 'Slab: 391532 kB' 'SReclaimable: 219868 kB' 'SUnreclaim: 171664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # 
continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.694 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.694 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@33 -- # echo 0 00:03:28.695 13:32:31 -- setup/common.sh@33 -- # return 0 00:03:28.695 13:32:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.695 13:32:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.695 13:32:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.695 13:32:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.695 13:32:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.695 13:32:31 -- setup/common.sh@18 -- # local node=1 00:03:28.695 13:32:31 -- setup/common.sh@19 -- # local var val 00:03:28.695 13:32:31 
-- setup/common.sh@20 -- # local mem_f mem 00:03:28.695 13:32:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.695 13:32:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.695 13:32:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.695 13:32:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.695 13:32:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 19866952 kB' 'MemUsed: 7844872 kB' 'SwapCached: 0 kB' 'Active: 4042336 kB' 'Inactive: 443868 kB' 'Active(anon): 3901044 kB' 'Inactive(anon): 0 kB' 'Active(file): 141292 kB' 'Inactive(file): 443868 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4301628 kB' 'Mapped: 8452 kB' 'AnonPages: 184752 kB' 'Shmem: 3716468 kB' 'KernelStack: 6008 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 245448 kB' 'Slab: 448772 kB' 'SReclaimable: 245448 kB' 'SUnreclaim: 203324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # 
continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.695 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.695 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 
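The node-by-node HugePages_Surp scans around this point feed custom_alloc's final check, just below, that node 0 ends up with 512 of the 1536 huge pages and node 1 with the remaining 1024. A minimal way to re-check that split outside the test, sketched here against the standard per-node meminfo files rather than the exact setup/hugepages.sh helpers (node count and expected values are taken from this run):

check_node_hugepages() {
    # Sketch only: read HugePages_Total from each node's meminfo, much as
    # get_meminfo does in this trace, and compare against the 512/1024 split.
    local -A expected=([0]=512 [1]=1024)
    local node actual
    for node in "${!expected[@]}"; do
        actual=$(awk '$3 == "HugePages_Total:" {print $4}' \
            "/sys/devices/system/node/node${node}/meminfo")
        echo "node${node}=${actual} expecting ${expected[$node]}"
        [[ $actual -eq ${expected[$node]} ]] || return 1
    done
}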
00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # continue 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.696 13:32:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.696 13:32:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.696 13:32:31 -- setup/common.sh@33 -- # echo 0 00:03:28.696 13:32:31 -- setup/common.sh@33 -- # return 0 00:03:28.696 13:32:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.696 13:32:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.696 13:32:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.696 13:32:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.696 13:32:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:28.696 node0=512 expecting 512 00:03:28.696 13:32:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.696 13:32:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.696 13:32:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.696 13:32:31 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:28.696 node1=1024 expecting 1024 00:03:28.696 13:32:31 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:28.696 00:03:28.696 real 0m1.735s 00:03:28.696 user 0m0.689s 00:03:28.696 sys 0m1.001s 00:03:28.696 13:32:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.696 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:03:28.696 ************************************ 00:03:28.696 END TEST custom_alloc 00:03:28.696 ************************************ 00:03:28.696 13:32:31 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:28.696 13:32:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.696 13:32:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.696 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:03:28.955 ************************************ 00:03:28.955 START TEST no_shrink_alloc 00:03:28.955 ************************************ 00:03:28.955 13:32:31 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:28.955 13:32:31 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:28.955 13:32:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.955 13:32:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.955 13:32:31 -- setup/hugepages.sh@51 -- # shift 00:03:28.955 13:32:31 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.955 13:32:31 -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.955 13:32:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.955 13:32:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.955 13:32:31 -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.955 13:32:31 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.955 13:32:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.955 13:32:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.955 13:32:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.955 13:32:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.955 13:32:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.955 13:32:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.955 13:32:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.955 13:32:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.955 13:32:31 -- setup/hugepages.sh@73 -- # return 0 00:03:28.955 13:32:31 -- setup/hugepages.sh@198 -- # setup output 00:03:28.955 13:32:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.955 13:32:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:30.330 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.330 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.330 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.330 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.330 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.330 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.330 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.330 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.330 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.330 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.330 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.330 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.330 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.330 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.330 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.330 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.330 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.590 13:32:33 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:30.590 13:32:33 -- setup/hugepages.sh@89 -- # local node 00:03:30.590 13:32:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.590 13:32:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.590 13:32:33 -- setup/hugepages.sh@92 -- # local surp 00:03:30.590 13:32:33 -- setup/hugepages.sh@93 -- # local resv 00:03:30.590 13:32:33 -- setup/hugepages.sh@94 -- # local anon 00:03:30.590 13:32:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.590 13:32:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.590 13:32:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.590 13:32:33 -- setup/common.sh@18 -- # local node= 00:03:30.590 13:32:33 -- setup/common.sh@19 -- # local var val 00:03:30.590 13:32:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.590 13:32:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.590 13:32:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.590 13:32:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.590 13:32:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.590 13:32:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.590 
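The get_test_nr_hugepages 2097152 0 call above turns a 2097152 kB request into nr_hugepages=1024, all pinned to node 0 (nodes_test[0]=1024), because this system uses 2048 kB huge pages. A worked sketch of that conversion, assuming the size argument and Hugepagesize are both in kB as the numbers in this run imply:

# Sketch only: requested pool size to page count, matching the nr_hugepages=1024 above.
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this machine
nr_hugepages=$((size_kb / hugepagesize_kb))                          # 2097152 / 2048 = 1024
echo "nr_hugepages=${nr_hugepages} on node 0"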
13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.590 13:32:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40370068 kB' 'MemAvailable: 45416608 kB' 'Buffers: 2696 kB' 'Cached: 15533392 kB' 'SwapCached: 0 kB' 'Active: 11452756 kB' 'Inactive: 4569992 kB' 'Active(anon): 10797088 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489440 kB' 'Mapped: 175880 kB' 'Shmem: 10310428 kB' 'KReclaimable: 465316 kB' 'Slab: 840068 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374752 kB' 'KernelStack: 13040 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11954316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198540 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.590 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.590 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 
13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- 
setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.591 13:32:33 -- setup/common.sh@33 -- # echo 0 00:03:30.591 13:32:33 -- setup/common.sh@33 -- # return 0 00:03:30.591 13:32:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:30.591 13:32:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.591 13:32:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.591 13:32:33 -- setup/common.sh@18 -- # local node= 00:03:30.591 13:32:33 -- setup/common.sh@19 -- # local var val 00:03:30.591 13:32:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.591 13:32:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.591 13:32:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.591 13:32:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.591 13:32:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.591 13:32:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40370376 kB' 'MemAvailable: 45416916 kB' 'Buffers: 2696 kB' 'Cached: 15533392 kB' 'SwapCached: 0 kB' 'Active: 11452192 kB' 'Inactive: 4569992 kB' 'Active(anon): 10796524 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 489284 kB' 'Mapped: 175840 kB' 'Shmem: 10310428 kB' 'KReclaimable: 465316 kB' 'Slab: 840068 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374752 kB' 'KernelStack: 13088 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11954328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198524 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
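The repeated entries above are bash xtrace output of a per-key scan over the meminfo snapshot: each iteration compares the current key against the one being queried (here HugePages_Surp) and continues until it matches. The key appears as \H\u\g\e\P\a\g\e\s\_\S\u\r\p because, under set -x, bash re-prints a quoted (literal, non-glob) right-hand side of [[ == ]] with every character backslash-escaped. A minimal hypothetical loop (not the actual SPDK setup/common.sh source) that produces the same kind of trace:

    # Hypothetical illustration only; reproduces the trace pattern seen above.
    set -x
    get=HugePages_Surp
    for var in MemTotal MemFree Shmem HugePages_Surp; do
        [[ $var == "$get" ]] || continue   # xtrace shows the quoted RHS escaped
        echo "found: $var"
    done
    set +x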
00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.591 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.591 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 
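Each iteration of the scan starts with the traced `IFS=': '` / `read -r var val _` pair, which splits one line of the snapshot on colons and spaces. A small stand-alone example using a value from this run's snapshot:

    # How one snapshot line splits under the traced read: IFS contains both ':'
    # and ' ', so the key lands in var, the number in val, and the "kB" unit is
    # discarded into the throwaway third variable.
    IFS=': ' read -r var val _ <<< 'MemFree:       40370376 kB'
    echo "$var $val"     # prints: MemFree 40370376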
00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.592 13:32:33 -- setup/common.sh@33 -- # echo 0 00:03:30.592 13:32:33 -- setup/common.sh@33 -- # return 0 00:03:30.592 13:32:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.592 13:32:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.592 13:32:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.592 13:32:33 -- setup/common.sh@18 -- # local node= 00:03:30.592 13:32:33 -- setup/common.sh@19 -- # local var val 00:03:30.592 13:32:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.592 13:32:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.592 13:32:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.592 13:32:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.592 13:32:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.592 13:32:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40370376 kB' 'MemAvailable: 45416916 kB' 'Buffers: 2696 kB' 'Cached: 15533404 kB' 'SwapCached: 0 kB' 'Active: 11452144 kB' 'Inactive: 4569992 kB' 'Active(anon): 10796476 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489220 kB' 'Mapped: 175840 kB' 'Shmem: 10310440 kB' 'KReclaimable: 465316 kB' 'Slab: 840076 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374760 kB' 'KernelStack: 13072 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11954340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198524 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- 
setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- 
setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.592 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.592 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.592 13:32:33 -- setup/common.sh@33 -- # echo 0 00:03:30.592 13:32:33 -- setup/common.sh@33 -- # return 0 00:03:30.592 13:32:33 -- setup/hugepages.sh@100 -- # resv=0 
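At this point the script has computed anon=0, surp=0 and resv=0, each via one get_meminfo call like the ones traced above. The real setup/common.sh reads the whole file into an array with mapfile and strips any "Node <n>" prefix via parameter expansion (common.sh@28/@29 in the trace); the following is only a simplified stand-alone sketch of the same behavior, with an assumed sed-based prefix strip instead:

    # Minimal sketch, not the actual SPDK helper: return one meminfo field,
    # optionally scoped to a NUMA node via the per-node sysfs meminfo file.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")   # per-node lines carry a "Node <n> " prefix
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)             # 0 in this run
    node0_total=$(get_meminfo HugePages_Total 0)   # 1024 in this run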
00:03:30.592 13:32:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.592 nr_hugepages=1024 00:03:30.592 13:32:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.592 resv_hugepages=0 00:03:30.592 13:32:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.592 surplus_hugepages=0 00:03:30.592 13:32:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.592 anon_hugepages=0 00:03:30.592 13:32:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.592 13:32:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.592 13:32:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.592 13:32:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.592 13:32:33 -- setup/common.sh@18 -- # local node= 00:03:30.592 13:32:33 -- setup/common.sh@19 -- # local var val 00:03:30.592 13:32:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.592 13:32:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.593 13:32:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.593 13:32:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.593 13:32:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.593 13:32:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40370376 kB' 'MemAvailable: 45416916 kB' 'Buffers: 2696 kB' 'Cached: 15533420 kB' 'SwapCached: 0 kB' 'Active: 11453428 kB' 'Inactive: 4569992 kB' 'Active(anon): 10797760 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490492 kB' 'Mapped: 176276 kB' 'Shmem: 10310456 kB' 'KReclaimable: 465316 kB' 'Slab: 840076 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374760 kB' 'KernelStack: 13056 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11956372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198556 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
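The arithmetic check traced a little earlier (hugepages.sh@107/@109) is the accounting identity that the HugePages_Total lookup in progress here feeds into: the kernel's total must equal the requested pages plus surplus and reserved pages. With the numbers reported in this run's snapshot, a stand-alone restatement of that check:

    # Accounting identity being verified, with this run's values:
    # 1024 (HugePages_Total) == 1024 (nr_hugepages) + 0 (surplus) + 0 (reserved)
    nr_hugepages=1024
    surp=0
    resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"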
00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.593 13:32:33 -- setup/common.sh@33 -- # echo 1024 00:03:30.593 13:32:33 -- setup/common.sh@33 -- # return 0 00:03:30.593 13:32:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.593 13:32:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.593 13:32:33 -- setup/hugepages.sh@27 -- # local node 00:03:30.593 13:32:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.593 13:32:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.593 13:32:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.593 13:32:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.593 13:32:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.593 13:32:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.593 13:32:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.593 13:32:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.593 13:32:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.593 13:32:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.593 13:32:33 -- setup/common.sh@18 -- # local node=0 00:03:30.593 13:32:33 -- setup/common.sh@19 -- # 
local var val 00:03:30.593 13:32:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.593 13:32:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.593 13:32:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.593 13:32:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.593 13:32:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.593 13:32:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829764 kB' 'MemFree: 18416596 kB' 'MemUsed: 14413168 kB' 'SwapCached: 0 kB' 'Active: 7401576 kB' 'Inactive: 4126124 kB' 'Active(anon): 6887200 kB' 'Inactive(anon): 0 kB' 'Active(file): 514376 kB' 'Inactive(file): 4126124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11226224 kB' 'Mapped: 167808 kB' 'AnonPages: 304572 kB' 'Shmem: 6585724 kB' 'KernelStack: 7064 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219868 kB' 'Slab: 391392 kB' 'SReclaimable: 219868 kB' 'SUnreclaim: 171524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.593 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.593 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
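The get_nodes step traced above (hugepages.sh@27-@33) enumerates the NUMA nodes with the extglob pattern node+([0-9]) and records a per-node count for each, ending with no_nodes=2 on this machine; the call in progress here then reads node 0's counters from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo. A simplified stand-alone sketch of that enumeration (assumed awk parsing, not the original script):

    # Sketch of per-node enumeration; per-node meminfo lines look like
    # "Node 0 HugePages_Total:  1024", so the value is the fourth field.
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        nodes_sys[n]=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this machine (node0 and node1)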
00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 
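The loop whose trace starts above (hugepages.sh@115-@117) adjusts each node's expected page count by the reserved and per-node surplus pages before comparing it with what the node actually holds. A simplified, hypothetical restatement of that check with this run's values (0 reserved, 0 surplus, 1024 pages expected on node0 and 0 on node1); it is not the original hugepages.sh code:

    # Per-node verification sketch; expected counts are adjusted by reserved
    # and surplus pages, then compared against the node's reported total.
    resv=0
    nodes_test=([0]=1024 [1]=0)   # expected per-node allocation in this run
    for node in "${!nodes_test[@]}"; do
        meminfo=/sys/devices/system/node/node$node/meminfo
        surp=$(awk '/HugePages_Surp:/ {print $4}' "$meminfo")
        actual=$(awk '/HugePages_Total:/ {print $4}' "$meminfo")
        (( nodes_test[node] += resv + surp ))
        echo "node$node=$actual expecting ${nodes_test[node]}"
    done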
00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # continue 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.594 13:32:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.594 13:32:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.594 13:32:33 -- setup/common.sh@33 -- # echo 0 00:03:30.594 13:32:33 -- setup/common.sh@33 -- # return 0 00:03:30.594 13:32:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.594 13:32:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.594 13:32:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.594 13:32:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.594 13:32:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.594 node0=1024 expecting 1024 00:03:30.594 13:32:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.594 13:32:33 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:30.594 13:32:33 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:30.594 13:32:33 -- setup/hugepages.sh@202 -- # setup output 00:03:30.594 13:32:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.594 13:32:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:32.498 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:32.498 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.498 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:32.498 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:32.498 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:32.498 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:32.498 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:32.498 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:32.498 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:32.498 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:32.498 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:32.498 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:32.498 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:32.498 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:32.498 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:32.498 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:32.498 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:32.498 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:32.498 13:32:34 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:32.498 13:32:34 -- setup/hugepages.sh@89 -- # local 
node 00:03:32.498 13:32:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.498 13:32:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.498 13:32:34 -- setup/hugepages.sh@92 -- # local surp 00:03:32.498 13:32:34 -- setup/hugepages.sh@93 -- # local resv 00:03:32.498 13:32:34 -- setup/hugepages.sh@94 -- # local anon 00:03:32.498 13:32:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.498 13:32:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.498 13:32:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.498 13:32:34 -- setup/common.sh@18 -- # local node= 00:03:32.498 13:32:34 -- setup/common.sh@19 -- # local var val 00:03:32.498 13:32:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.498 13:32:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.498 13:32:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.498 13:32:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.498 13:32:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.498 13:32:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.498 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 13:32:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40358364 kB' 'MemAvailable: 45404904 kB' 'Buffers: 2696 kB' 'Cached: 15533472 kB' 'SwapCached: 0 kB' 'Active: 11453732 kB' 'Inactive: 4569992 kB' 'Active(anon): 10798064 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490796 kB' 'Mapped: 176264 kB' 'Shmem: 10310508 kB' 'KReclaimable: 465316 kB' 'Slab: 840204 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374888 kB' 'KernelStack: 13248 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11978920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198652 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- 
setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:34 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 
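What follows in the trace is the accounting half of verify_nr_hugepages: AnonHugePages, HugePages_Surp and HugePages_Rsvd are read with the same parser, the totals are echoed, and the configured pool is compared against the expected count. Because setup.sh was just re-run with NRHUGE=512 and CLEAR_HUGE=no, the existing pool was kept ("INFO: Requested 512 hugepages but 1024 already allocated on node0" above), so 1024 is still the number being verified. A condensed, hypothetical rendering of that arithmetic, not the script itself:

  # Tiny meminfo getter so this sketch stands alone (awk does the same job
  # as the read loop in the trace).
  get() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

  expected=1024                        # "node0=1024 expecting 1024" above

  anon=$(get AnonHugePages)            # transparent hugepage usage, kB
  surp=$(get HugePages_Surp)           # surplus pages in the pool
  resv=$(get HugePages_Rsvd)           # reserved but not yet faulted in
  total=$(get HugePages_Total)         # persistent pool size, in pages

  echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

  # Mirrors the checks at hugepages.sh@107 and @109 in this trace: the pool
  # must hold exactly the expected pages, with no surplus or reservations.
  if (( expected == total + surp + resv )) && (( expected == total )); then
    echo "hugepages verified: $total pages of $(get Hugepagesize) kB"
  else
    echo "hugepage mismatch: total=$total surp=$surp resv=$resv" >&2
  fi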
00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.499 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.499 13:32:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.499 13:32:35 -- setup/common.sh@33 -- # echo 0 00:03:32.500 13:32:35 -- setup/common.sh@33 -- # return 0 00:03:32.500 13:32:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:32.500 13:32:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.500 13:32:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.500 13:32:35 -- setup/common.sh@18 -- 
# local node= 00:03:32.500 13:32:35 -- setup/common.sh@19 -- # local var val 00:03:32.500 13:32:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.500 13:32:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.500 13:32:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.500 13:32:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.500 13:32:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.500 13:32:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40360236 kB' 'MemAvailable: 45406776 kB' 'Buffers: 2696 kB' 'Cached: 15533472 kB' 'SwapCached: 0 kB' 'Active: 11455060 kB' 'Inactive: 4569992 kB' 'Active(anon): 10799392 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491668 kB' 'Mapped: 175708 kB' 'Shmem: 10310508 kB' 'KReclaimable: 465316 kB' 'Slab: 840196 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374880 kB' 'KernelStack: 13552 kB' 'PageTables: 9392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11956624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.500 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.500 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 
13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.501 13:32:35 -- setup/common.sh@33 -- # echo 0 00:03:32.501 13:32:35 -- setup/common.sh@33 -- # return 0 00:03:32.501 13:32:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:32.501 13:32:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.501 13:32:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.501 13:32:35 -- setup/common.sh@18 -- # local node= 00:03:32.501 13:32:35 -- setup/common.sh@19 -- # local var val 00:03:32.501 13:32:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.501 13:32:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.501 13:32:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.501 13:32:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.501 13:32:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.501 13:32:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40359432 kB' 'MemAvailable: 45405972 kB' 'Buffers: 2696 kB' 'Cached: 15533484 kB' 'SwapCached: 0 kB' 'Active: 11453764 kB' 'Inactive: 4569992 kB' 'Active(anon): 10798096 kB' 'Inactive(anon): 0 
kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490764 kB' 'Mapped: 175732 kB' 'Shmem: 10310520 kB' 'KReclaimable: 465316 kB' 'Slab: 840268 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374952 kB' 'KernelStack: 13280 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11956764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198700 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.501 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.501 13:32:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 
13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.502 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.502 13:32:35 -- setup/common.sh@33 -- # echo 0 00:03:32.502 13:32:35 -- setup/common.sh@33 -- # return 0 00:03:32.502 13:32:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:32.502 13:32:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.502 nr_hugepages=1024 00:03:32.502 13:32:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.502 resv_hugepages=0 00:03:32.502 13:32:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.502 surplus_hugepages=0 00:03:32.502 13:32:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.502 anon_hugepages=0 00:03:32.502 13:32:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.502 13:32:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.502 13:32:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.502 13:32:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.502 13:32:35 -- setup/common.sh@18 -- # local node= 00:03:32.502 13:32:35 -- setup/common.sh@19 -- # local var val 00:03:32.502 13:32:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.502 13:32:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.502 13:32:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.502 13:32:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.502 13:32:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.502 13:32:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.502 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541588 kB' 'MemFree: 40358120 kB' 'MemAvailable: 45404660 kB' 'Buffers: 2696 kB' 'Cached: 15533500 kB' 'SwapCached: 0 kB' 'Active: 11453932 kB' 'Inactive: 4569992 kB' 'Active(anon): 10798264 kB' 'Inactive(anon): 0 kB' 'Active(file): 655668 kB' 'Inactive(file): 4569992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490564 kB' 'Mapped: 175732 kB' 'Shmem: 10310536 kB' 'KReclaimable: 465316 kB' 'Slab: 840268 kB' 'SReclaimable: 465316 kB' 'SUnreclaim: 374952 kB' 'KernelStack: 13536 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610820 kB' 'Committed_AS: 11956784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 71808 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1986020 kB' 'DirectMap2M: 30439424 kB' 'DirectMap1G: 36700160 kB' 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.503 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.503 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.504 13:32:35 -- setup/common.sh@33 -- # echo 1024 00:03:32.504 13:32:35 -- setup/common.sh@33 
-- # return 0 00:03:32.504 13:32:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.504 13:32:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.504 13:32:35 -- setup/hugepages.sh@27 -- # local node 00:03:32.504 13:32:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.504 13:32:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.504 13:32:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.504 13:32:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.504 13:32:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.504 13:32:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.504 13:32:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.504 13:32:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.504 13:32:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.504 13:32:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.504 13:32:35 -- setup/common.sh@18 -- # local node=0 00:03:32.504 13:32:35 -- setup/common.sh@19 -- # local var val 00:03:32.504 13:32:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.504 13:32:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.504 13:32:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.504 13:32:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.504 13:32:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.504 13:32:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829764 kB' 'MemFree: 18411924 kB' 'MemUsed: 14417840 kB' 'SwapCached: 0 kB' 'Active: 7402316 kB' 'Inactive: 4126124 kB' 'Active(anon): 6887940 kB' 'Inactive(anon): 0 kB' 'Active(file): 514376 kB' 'Inactive(file): 4126124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11226228 kB' 'Mapped: 167388 kB' 'AnonPages: 305340 kB' 'Shmem: 6585728 kB' 'KernelStack: 7336 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219868 kB' 'Slab: 391552 kB' 'SReclaimable: 219868 kB' 'SUnreclaim: 171684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ SwapCached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.504 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.504 13:32:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 
-- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # continue 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.505 13:32:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.505 13:32:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.505 13:32:35 -- setup/common.sh@33 -- # echo 0 00:03:32.505 13:32:35 -- setup/common.sh@33 -- # return 0 00:03:32.505 13:32:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.505 13:32:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.505 13:32:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.505 13:32:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.505 13:32:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.505 node0=1024 expecting 1024 00:03:32.505 13:32:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.505 00:03:32.505 real 0m3.556s 00:03:32.505 user 0m1.493s 00:03:32.505 sys 0m1.978s 00:03:32.505 13:32:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.505 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:03:32.505 ************************************ 00:03:32.505 END TEST no_shrink_alloc 00:03:32.505 ************************************ 00:03:32.505 13:32:35 -- setup/hugepages.sh@217 -- # clear_hp 00:03:32.505 13:32:35 -- setup/hugepages.sh@37 -- # local node hp 00:03:32.505 13:32:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.505 13:32:35 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.505 13:32:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.505 13:32:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.505 13:32:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.505 13:32:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.505 13:32:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.505 13:32:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.505 13:32:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.505 13:32:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.505 13:32:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.505 13:32:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.505 00:03:32.505 real 0m14.346s 00:03:32.505 user 0m5.584s 00:03:32.505 sys 0m7.471s 00:03:32.505 13:32:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.505 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:03:32.505 ************************************ 00:03:32.505 END TEST hugepages 00:03:32.505 ************************************ 00:03:32.505 13:32:35 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:32.505 13:32:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.505 13:32:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.505 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:03:32.505 ************************************ 00:03:32.505 START TEST driver 00:03:32.505 ************************************ 00:03:32.505 13:32:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:32.764 * Looking for test storage... 
00:03:32.764 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:32.764 13:32:35 -- setup/driver.sh@68 -- # setup reset 00:03:32.764 13:32:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.764 13:32:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.296 13:32:38 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:35.296 13:32:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.296 13:32:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.296 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.554 ************************************ 00:03:35.554 START TEST guess_driver 00:03:35.554 ************************************ 00:03:35.554 13:32:38 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:35.554 13:32:38 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:35.554 13:32:38 -- setup/driver.sh@47 -- # local fail=0 00:03:35.554 13:32:38 -- setup/driver.sh@49 -- # pick_driver 00:03:35.554 13:32:38 -- setup/driver.sh@36 -- # vfio 00:03:35.554 13:32:38 -- setup/driver.sh@21 -- # local iommu_grups 00:03:35.554 13:32:38 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:35.554 13:32:38 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:35.554 13:32:38 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:35.554 13:32:38 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:35.554 13:32:38 -- setup/driver.sh@29 -- # (( 187 > 0 )) 00:03:35.554 13:32:38 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:35.554 13:32:38 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:35.554 13:32:38 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:35.554 13:32:38 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:35.554 13:32:38 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:35.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:35.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:35.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:35.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:35.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:35.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:35.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:35.554 13:32:38 -- setup/driver.sh@30 -- # return 0 00:03:35.554 13:32:38 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:35.554 13:32:38 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:35.554 13:32:38 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:35.554 13:32:38 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:35.554 Looking for driver=vfio-pci 00:03:35.554 13:32:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.554 13:32:38 -- setup/driver.sh@45 -- # setup output config 00:03:35.554 13:32:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.554 13:32:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:36.928 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.928 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
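(Editor's note on the guess_driver trace above.) pick_driver settles on vfio-pci because IOMMU groups are present (187 of them) and modprobe --show-depends resolves vfio_pci to real .ko modules; the unsafe no-IOMMU module parameter is the alternative path when no groups exist. A simplified stand-alone version of that decision (the function name and structure are illustrative; the real setup/driver.sh handles more cases):

    # Illustrative, simplified version of the vfio-pci selection traced above.
    pick_vfio_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=N
        # Unsafe no-IOMMU mode would also make vfio usable without groups.
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
            modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_vfio_driver)
    echo "Looking for driver=$driver"
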
00:03:36.928 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.928 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.928 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.928 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.928 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.928 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.928 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.928 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.928 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.928 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.928 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.928 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.928 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.928 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.928 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.928 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.187 13:32:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.187 13:32:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.187 13:32:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.122 13:32:40 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:38.122 13:32:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.122 13:32:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.122 13:32:40 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:38.122 13:32:40 -- setup/driver.sh@65 -- # setup reset 00:03:38.122 13:32:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.122 13:32:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.433 00:03:41.433 real 0m5.361s 00:03:41.433 user 0m1.322s 00:03:41.433 sys 0m2.155s 00:03:41.433 13:32:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.433 13:32:43 -- common/autotest_common.sh@10 -- # set +x 00:03:41.433 ************************************ 00:03:41.433 END TEST guess_driver 00:03:41.433 ************************************ 00:03:41.433 00:03:41.433 real 0m8.311s 00:03:41.433 user 0m2.042s 00:03:41.433 sys 0m3.514s 00:03:41.433 13:32:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.433 13:32:43 -- common/autotest_common.sh@10 -- # set +x 00:03:41.433 ************************************ 00:03:41.433 END TEST driver 00:03:41.433 ************************************ 00:03:41.433 13:32:43 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:41.433 13:32:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.433 13:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.433 13:32:43 -- common/autotest_common.sh@10 -- # set +x 00:03:41.433 ************************************ 00:03:41.433 START TEST devices 00:03:41.433 ************************************ 00:03:41.433 13:32:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:41.433 * Looking for test storage... 
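(Editor's note on the marker loop above.) The repeating [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] pairs come from scanning the 'setup.sh config' output line by line: each device line ends in '-> <driver>', and the test bumps a failure counter whenever the bound driver differs from the guessed one, finishing with (( fail == 0 )). A minimal rendering of that loop follows; the two sample lines stand in for real 'setup.sh config' output and are invented for illustration:

    # Illustrative: verify every device reported by "setup.sh config" is
    # bound to the expected driver. The sample lines below are made up.
    driver=vfio-pci
    fail=0
    config_output=$(printf '%s\n' \
        '0000:84:00.0 (8086 0a54): nvme -> vfio-pci' \
        '0000:00:04.0 (8086 2021): ioatdma -> vfio-pci')

    while read -r _ _ _ _ marker setup_driver; do
        # Only lines carrying a "->" marker describe a bound device.
        [[ $marker == '->' ]] || continue
        [[ $setup_driver == "$driver" ]] || fail=$((fail + 1))
    done <<< "$config_output"

    (( fail == 0 )) && echo "all devices bound to $driver"
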
00:03:41.433 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:41.433 13:32:43 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:41.433 13:32:43 -- setup/devices.sh@192 -- # setup reset 00:03:41.433 13:32:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.433 13:32:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.813 13:32:45 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:42.813 13:32:45 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:42.813 13:32:45 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:42.813 13:32:45 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:42.813 13:32:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:42.813 13:32:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:42.813 13:32:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:42.813 13:32:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.813 13:32:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:42.813 13:32:45 -- setup/devices.sh@196 -- # blocks=() 00:03:42.813 13:32:45 -- setup/devices.sh@196 -- # declare -a blocks 00:03:42.813 13:32:45 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:42.813 13:32:45 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:42.813 13:32:45 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:42.813 13:32:45 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:42.813 13:32:45 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:42.813 13:32:45 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:42.813 13:32:45 -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:03:42.813 13:32:45 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:03:42.813 13:32:45 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:42.813 13:32:45 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:42.813 13:32:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:42.813 No valid GPT data, bailing 00:03:43.073 13:32:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.073 13:32:45 -- scripts/common.sh@391 -- # pt= 00:03:43.073 13:32:45 -- scripts/common.sh@392 -- # return 1 00:03:43.073 13:32:45 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:43.073 13:32:45 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:43.073 13:32:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:43.073 13:32:45 -- setup/common.sh@80 -- # echo 1000204886016 00:03:43.073 13:32:45 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:43.073 13:32:45 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:43.073 13:32:45 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:03:43.073 13:32:45 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:43.073 13:32:45 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:43.073 13:32:45 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:43.073 13:32:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.073 13:32:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.073 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:03:43.073 ************************************ 00:03:43.073 START TEST nvme_mount 00:03:43.073 ************************************ 00:03:43.073 13:32:45 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:43.073 13:32:45 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:43.073 13:32:45 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:43.073 13:32:45 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.073 13:32:45 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.073 13:32:45 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:43.073 13:32:45 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:43.073 13:32:45 -- setup/common.sh@40 -- # local part_no=1 00:03:43.073 13:32:45 -- setup/common.sh@41 -- # local size=1073741824 00:03:43.073 13:32:45 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:43.073 13:32:45 -- setup/common.sh@44 -- # parts=() 00:03:43.073 13:32:45 -- setup/common.sh@44 -- # local parts 00:03:43.073 13:32:45 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:43.073 13:32:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.073 13:32:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:43.073 13:32:45 -- setup/common.sh@46 -- # (( part++ )) 00:03:43.073 13:32:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.073 13:32:45 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:43.073 13:32:45 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:43.073 13:32:45 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:44.008 Creating new GPT entries in memory. 00:03:44.008 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:44.008 other utilities. 00:03:44.008 13:32:46 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:44.008 13:32:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.008 13:32:46 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:44.008 13:32:46 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:44.008 13:32:46 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:45.385 Creating new GPT entries in memory. 00:03:45.385 The operation has completed successfully. 
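(Editor's note on the partition_drive trace above.) The helper converts the 1073741824-byte partition size into 512-byte sectors (2097152 of them), starts the first partition at sector 2048, and hands sgdisk the resulting range, which is exactly the --new=1:2048:2099199 seen in the trace; sync_dev_uevents.sh then waits for the kernel to announce the new partition. A simplified sketch of the same steps, destructive by nature, so only ever aimed at a disposable test disk:

    # Illustrative sketch of the partitioning traced above. Destructive:
    # never run it against a disk holding data you care about.
    disk=/dev/nvme0n1
    part_no=1
    size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
    part_start=0
    part_end=0

    sgdisk "$disk" --zap-all       # wipe existing GPT/MBR structures

    for (( part = 1; part <= part_no; part++ )); do
        # First partition begins at sector 2048; later ones follow on.
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serialises concurrent tools poking the same disk node.
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done

    # Stand-in for sync_dev_uevents.sh: ask the kernel to re-read and settle.
    partprobe "$disk"
    udevadm settle
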
00:03:45.385 13:32:47 -- setup/common.sh@57 -- # (( part++ )) 00:03:45.385 13:32:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.385 13:32:47 -- setup/common.sh@62 -- # wait 1003496 00:03:45.385 13:32:47 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.385 13:32:47 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:45.385 13:32:47 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.385 13:32:47 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:45.385 13:32:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:45.385 13:32:47 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.385 13:32:47 -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.385 13:32:47 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:45.385 13:32:47 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:45.385 13:32:47 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.385 13:32:47 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.385 13:32:47 -- setup/devices.sh@53 -- # local found=0 00:03:45.385 13:32:47 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.385 13:32:47 -- setup/devices.sh@56 -- # : 00:03:45.385 13:32:47 -- setup/devices.sh@59 -- # local pci status 00:03:45.385 13:32:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.385 13:32:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:45.385 13:32:47 -- setup/devices.sh@47 -- # setup output config 00:03:45.385 13:32:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.385 13:32:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:46.762 13:32:49 -- setup/devices.sh@63 -- # found=1 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.762 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.762 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.763 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.763 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.763 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.763 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.763 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.763 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.763 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.763 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.763 13:32:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:46.763 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.763 13:32:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.763 13:32:49 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:46.763 13:32:49 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.763 13:32:49 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.763 13:32:49 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.763 13:32:49 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:46.763 13:32:49 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.763 13:32:49 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.763 13:32:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.763 13:32:49 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:46.763 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.763 13:32:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.763 13:32:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:47.022 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:47.023 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:47.023 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:47.023 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
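(Editor's note on the nvme_mount cycle above.) Both passes of the test run the same cycle: make an ext4 filesystem (first on the partition, then, after the wipefs cleanup whose output ends just above, on the whole disk with a 1024M cap), mount it under test/setup/nvme_mount, drop a dummy test_nvme file for the verify step to find, and finally unmount and wipe the signatures. In outline, with the device and paths taken from this log and intended purely as a scratch-disk illustration:

    # Illustrative mount/cleanup cycle mirroring the nvme_mount test above.
    dev=/dev/nvme0n1p1
    mnt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount

    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"       # -q quiet, -F force creation without prompting
    mount "$dev" "$mnt"
    : > "$mnt/test_nvme"       # dummy file the verify step checks for

    # cleanup_nvme equivalent: remove the file, unmount, erase signatures.
    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "$dev"
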
00:03:47.023 13:32:49 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:47.023 13:32:49 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:47.023 13:32:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.023 13:32:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:47.023 13:32:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:47.023 13:32:49 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.023 13:32:49 -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.023 13:32:49 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:47.023 13:32:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:47.023 13:32:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.023 13:32:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.023 13:32:49 -- setup/devices.sh@53 -- # local found=0 00:03:47.023 13:32:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:47.023 13:32:49 -- setup/devices.sh@56 -- # : 00:03:47.023 13:32:49 -- setup/devices.sh@59 -- # local pci status 00:03:47.023 13:32:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.023 13:32:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:47.023 13:32:49 -- setup/devices.sh@47 -- # setup output config 00:03:47.023 13:32:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.023 13:32:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:48.400 13:32:51 -- setup/devices.sh@63 -- # found=1 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.400 13:32:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:48.400 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.659 13:32:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.659 13:32:51 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:48.659 13:32:51 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.659 13:32:51 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:48.659 13:32:51 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.659 13:32:51 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.659 13:32:51 -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:03:48.659 13:32:51 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:48.660 13:32:51 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:48.660 13:32:51 -- setup/devices.sh@50 -- # local mount_point= 00:03:48.660 13:32:51 -- setup/devices.sh@51 -- # local test_file= 00:03:48.660 13:32:51 -- setup/devices.sh@53 -- # local found=0 00:03:48.660 13:32:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:48.660 13:32:51 -- setup/devices.sh@59 -- # local pci status 00:03:48.660 13:32:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.660 13:32:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:48.660 13:32:51 -- setup/devices.sh@47 -- # setup output config 00:03:48.660 13:32:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.660 13:32:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:50.035 13:32:52 -- setup/devices.sh@63 -- # found=1 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.035 13:32:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.035 13:32:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:50.035 13:32:52 -- setup/devices.sh@68 -- # return 0 00:03:50.035 13:32:52 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:50.035 13:32:52 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.035 13:32:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.035 13:32:52 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:03:50.035 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.035 00:03:50.035 real 0m7.062s 00:03:50.035 user 0m1.757s 00:03:50.035 sys 0m2.873s 00:03:50.035 13:32:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:50.035 13:32:52 -- common/autotest_common.sh@10 -- # set +x 00:03:50.035 ************************************ 00:03:50.035 END TEST nvme_mount 00:03:50.035 ************************************ 00:03:50.035 13:32:52 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:50.035 13:32:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.035 13:32:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.035 13:32:52 -- common/autotest_common.sh@10 -- # set +x 00:03:50.293 ************************************ 00:03:50.293 START TEST dm_mount 00:03:50.293 ************************************ 00:03:50.293 13:32:52 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:50.293 13:32:52 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:50.293 13:32:52 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:50.293 13:32:52 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:50.293 13:32:52 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:50.293 13:32:52 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.293 13:32:52 -- setup/common.sh@40 -- # local part_no=2 00:03:50.293 13:32:52 -- setup/common.sh@41 -- # local size=1073741824 00:03:50.293 13:32:52 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.293 13:32:52 -- setup/common.sh@44 -- # parts=() 00:03:50.293 13:32:52 -- setup/common.sh@44 -- # local parts 00:03:50.293 13:32:52 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.293 13:32:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.293 13:32:52 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.293 13:32:52 -- setup/common.sh@46 -- # (( part++ )) 00:03:50.293 13:32:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.293 13:32:52 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.293 13:32:52 -- setup/common.sh@46 -- # (( part++ )) 00:03:50.293 13:32:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.293 13:32:52 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:50.293 13:32:52 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.293 13:32:52 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:51.226 Creating new GPT entries in memory. 00:03:51.226 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.226 other utilities. 00:03:51.226 13:32:53 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.226 13:32:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.226 13:32:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.226 13:32:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.226 13:32:53 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:52.161 Creating new GPT entries in memory. 00:03:52.161 The operation has completed successfully. 00:03:52.161 13:32:54 -- setup/common.sh@57 -- # (( part++ )) 00:03:52.161 13:32:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.161 13:32:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:52.161 13:32:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.161 13:32:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:53.537 The operation has completed successfully. 00:03:53.537 13:32:55 -- setup/common.sh@57 -- # (( part++ )) 00:03:53.537 13:32:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.537 13:32:55 -- setup/common.sh@62 -- # wait 1006179 00:03:53.537 13:32:55 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:53.537 13:32:55 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:53.537 13:32:55 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.537 13:32:55 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:53.537 13:32:56 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:53.537 13:32:56 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:53.537 13:32:56 -- setup/devices.sh@161 -- # break 00:03:53.537 13:32:56 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:53.538 13:32:56 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:53.538 13:32:56 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:53.538 13:32:56 -- setup/devices.sh@166 -- # dm=dm-0 00:03:53.538 13:32:56 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:53.538 13:32:56 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:53.538 13:32:56 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:53.538 13:32:56 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:03:53.538 13:32:56 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:53.538 13:32:56 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:53.538 13:32:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:53.538 13:32:56 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:53.538 13:32:56 -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.538 13:32:56 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:53.538 13:32:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:53.538 13:32:56 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:53.538 13:32:56 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.538 13:32:56 -- setup/devices.sh@53 -- # local found=0 00:03:53.538 13:32:56 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:53.538 13:32:56 -- setup/devices.sh@56 -- # : 00:03:53.538 13:32:56 -- setup/devices.sh@59 -- # local pci status 00:03:53.538 13:32:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.538 13:32:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:53.538 13:32:56 -- setup/devices.sh@47 -- # setup output config 
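Annotation: the dm_mount trace above partitions the NVMe disk with sgdisk, wraps the result in a device-mapper target named nvme_dm_test, formats it with ext4 and mounts it under test/setup/dm_mount. A minimal standalone sketch of that flow follows; the device name, the single-partition linear table and the mount point are assumptions for brevity (the real setup/devices.sh maps both partitions into the one dm device), not the script's exact logic.
  disk=/dev/nvme0n1                          # assumed: the drive under test
  sgdisk "$disk" --zap-all
  sgdisk "$disk" --new=1:2048:2099199        # partition 1, sector range taken from the trace
  sgdisk "$disk" --new=2:2099200:4196351     # partition 2
  partprobe "$disk"
  # expose partition 1 through device-mapper as a simple linear target
  echo "0 $(blockdev --getsz ${disk}p1) linear ${disk}p1 0" | dmsetup create nvme_dm_test
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mkdir -p /tmp/dm_mount
  mount /dev/mapper/nvme_dm_test /tmp/dm_mount
  touch /tmp/dm_mount/test_dm                # dummy file the verify step later looks for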
00:03:53.538 13:32:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.538 13:32:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:54.914 13:32:57 -- setup/devices.sh@63 -- # found=1 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.914 13:32:57 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:54.914 13:32:57 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:54.914 13:32:57 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.914 13:32:57 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.914 13:32:57 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:54.914 13:32:57 -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:54.914 13:32:57 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:54.914 13:32:57 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:54.914 13:32:57 -- setup/devices.sh@50 -- # local mount_point= 00:03:54.914 13:32:57 -- setup/devices.sh@51 -- # local test_file= 00:03:54.914 13:32:57 -- setup/devices.sh@53 -- # local found=0 00:03:54.914 13:32:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.914 13:32:57 -- setup/devices.sh@59 -- # local pci status 00:03:54.914 13:32:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.914 13:32:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:54.914 13:32:57 -- setup/devices.sh@47 -- # setup output config 00:03:54.914 13:32:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.914 13:32:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:56.292 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:56.293 13:32:59 -- setup/devices.sh@63 -- # found=1 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.293 13:32:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:56.293 13:32:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.554 13:32:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.554 13:32:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.554 13:32:59 -- setup/devices.sh@68 -- # return 0 00:03:56.554 13:32:59 -- setup/devices.sh@187 -- # cleanup_dm 00:03:56.554 13:32:59 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:56.554 13:32:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:56.554 13:32:59 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:56.554 13:32:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.554 13:32:59 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:56.554 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.554 13:32:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:56.554 13:32:59 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:56.554 00:03:56.554 real 0m6.317s 00:03:56.554 user 0m1.176s 00:03:56.554 sys 0m1.977s 00:03:56.554 13:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.554 13:32:59 -- common/autotest_common.sh@10 -- # set +x 00:03:56.554 ************************************ 00:03:56.554 END TEST dm_mount 00:03:56.554 ************************************ 00:03:56.554 13:32:59 -- setup/devices.sh@1 -- # cleanup 00:03:56.554 13:32:59 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:56.554 13:32:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.554 13:32:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.554 13:32:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.554 13:32:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.554 13:32:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.813 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.813 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:56.813 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.813 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.813 13:32:59 -- setup/devices.sh@12 -- 
# cleanup_dm 00:03:56.813 13:32:59 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:56.813 13:32:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:56.813 13:32:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.813 13:32:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:56.813 13:32:59 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.813 13:32:59 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:56.813 00:03:56.813 real 0m15.826s 00:03:56.813 user 0m3.763s 00:03:56.813 sys 0m6.212s 00:03:56.813 13:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.813 13:32:59 -- common/autotest_common.sh@10 -- # set +x 00:03:56.813 ************************************ 00:03:56.813 END TEST devices 00:03:56.813 ************************************ 00:03:56.813 00:03:56.813 real 0m51.273s 00:03:56.813 user 0m15.534s 00:03:56.813 sys 0m23.933s 00:03:56.813 13:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.813 13:32:59 -- common/autotest_common.sh@10 -- # set +x 00:03:56.813 ************************************ 00:03:56.813 END TEST setup.sh 00:03:56.813 ************************************ 00:03:56.813 13:32:59 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:58.225 Hugepages 00:03:58.225 node hugesize free / total 00:03:58.225 node0 1048576kB 0 / 0 00:03:58.225 node0 2048kB 2048 / 2048 00:03:58.225 node1 1048576kB 0 / 0 00:03:58.225 node1 2048kB 0 / 0 00:03:58.225 00:03:58.225 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.225 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:58.225 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:58.225 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:58.225 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:58.225 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:58.225 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:58.483 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:58.483 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:58.483 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:58.483 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:58.483 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:58.483 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:58.483 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:58.484 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:58.484 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:58.484 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:58.484 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:58.484 13:33:01 -- spdk/autotest.sh@130 -- # uname -s 00:03:58.484 13:33:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:58.484 13:33:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:58.484 13:33:01 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:59.860 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:59.860 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:59.860 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.860 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.860 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.860 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.860 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:59.860 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:59.860 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:59.860 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:59.860 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.118 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.118 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.118 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.118 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.118 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.056 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.056 13:33:03 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:02.433 13:33:04 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:02.433 13:33:04 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:02.433 13:33:04 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.433 13:33:04 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:02.433 13:33:04 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:02.433 13:33:04 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:02.433 13:33:04 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.433 13:33:04 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.433 13:33:04 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:02.433 13:33:04 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:02.433 13:33:04 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:84:00.0 00:04:02.433 13:33:04 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.812 Waiting for block devices as requested 00:04:03.812 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:04:03.812 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:03.812 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:03.812 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:04.072 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:04.072 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:04.072 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:04.072 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:04.332 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:04.332 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:04.332 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:04.332 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:04.591 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:04.591 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:04.591 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:04.591 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:04.851 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:04.851 13:33:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.851 13:33:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:04:04.851 13:33:07 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:04.851 13:33:07 -- common/autotest_common.sh@1488 -- # grep 0000:84:00.0/nvme/nvme 00:04:04.851 13:33:07 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:04:04.851 13:33:07 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:04:04.851 13:33:07 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:04:04.851 13:33:07 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:04.851 13:33:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:04.851 
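Annotation: get_nvme_ctrlr_from_bdf in the trace above resolves which NVMe character device sits behind PCI address 0000:84:00.0 by walking sysfs. Reduced to a standalone loop (the BDF is the one from this run), the idea is:
  bdf=0000:84:00.0
  for ctrl in /sys/class/nvme/nvme*; do
      if readlink -f "$ctrl" | grep -q "$bdf/nvme/"; then
          echo "controller for $bdf is /dev/$(basename "$ctrl")"   # prints /dev/nvme0 on this node
      fi
  done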
13:33:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:04.851 13:33:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:04.851 13:33:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.851 13:33:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.851 13:33:07 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:04.851 13:33:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.851 13:33:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:04.851 13:33:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:04.851 13:33:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.851 13:33:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.851 13:33:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.851 13:33:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.851 13:33:07 -- common/autotest_common.sh@1543 -- # continue 00:04:04.851 13:33:07 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:04.851 13:33:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:04.851 13:33:07 -- common/autotest_common.sh@10 -- # set +x 00:04:04.851 13:33:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:04.851 13:33:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:04.851 13:33:07 -- common/autotest_common.sh@10 -- # set +x 00:04:04.851 13:33:07 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:06.226 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:06.486 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:06.486 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:06.486 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:06.486 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:06.486 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:06.486 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:06.486 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:06.486 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:07.422 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:07.422 13:33:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:07.422 13:33:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:07.422 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 13:33:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:07.680 13:33:10 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:07.680 13:33:10 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.680 13:33:10 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:07.680 13:33:10 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:07.680 13:33:10 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:07.680 13:33:10 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:07.680 13:33:10 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:07.680 13:33:10 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.680 13:33:10 -- common/autotest_common.sh@1500 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:07.680 13:33:10 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:07.680 13:33:10 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:07.680 13:33:10 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:84:00.0 00:04:07.680 13:33:10 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:07.680 13:33:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:04:07.680 13:33:10 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:07.680 13:33:10 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:07.680 13:33:10 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:07.680 13:33:10 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:84:00.0 00:04:07.680 13:33:10 -- common/autotest_common.sh@1578 -- # [[ -z 0000:84:00.0 ]] 00:04:07.680 13:33:10 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1012775 00:04:07.680 13:33:10 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.680 13:33:10 -- common/autotest_common.sh@1584 -- # waitforlisten 1012775 00:04:07.680 13:33:10 -- common/autotest_common.sh@817 -- # '[' -z 1012775 ']' 00:04:07.680 13:33:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.680 13:33:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:07.680 13:33:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.681 13:33:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:07.681 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:04:07.681 [2024-04-18 13:33:10.381768] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
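Annotation: the spdk_tgt instance started here exists only so opal_revert_cleanup can attach the controller and revert any OPAL-locked namespace before the real tests run. Stripped of the shell wrappers, the RPC sequence the next part of the trace drives is roughly the two calls below (socket defaults to /var/tmp/spdk.sock, BDF comes from the 0x0a54 lookup above):
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0
  scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # returns -32603 "Internal error" on this drive, as the trace below shows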
00:04:07.681 [2024-04-18 13:33:10.381858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012775 ] 00:04:07.681 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.681 [2024-04-18 13:33:10.460703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.939 [2024-04-18 13:33:10.584990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.197 13:33:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:08.197 13:33:10 -- common/autotest_common.sh@850 -- # return 0 00:04:08.197 13:33:10 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:08.197 13:33:10 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:08.197 13:33:10 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:04:11.478 nvme0n1 00:04:11.478 13:33:13 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:11.478 [2024-04-18 13:33:14.255974] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:11.478 [2024-04-18 13:33:14.256034] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:11.478 request: 00:04:11.478 { 00:04:11.478 "nvme_ctrlr_name": "nvme0", 00:04:11.478 "password": "test", 00:04:11.478 "method": "bdev_nvme_opal_revert", 00:04:11.478 "req_id": 1 00:04:11.478 } 00:04:11.478 Got JSON-RPC error response 00:04:11.478 response: 00:04:11.478 { 00:04:11.478 "code": -32603, 00:04:11.478 "message": "Internal error" 00:04:11.478 } 00:04:11.478 13:33:14 -- common/autotest_common.sh@1590 -- # true 00:04:11.478 13:33:14 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:11.478 13:33:14 -- common/autotest_common.sh@1594 -- # killprocess 1012775 00:04:11.478 13:33:14 -- common/autotest_common.sh@936 -- # '[' -z 1012775 ']' 00:04:11.478 13:33:14 -- common/autotest_common.sh@940 -- # kill -0 1012775 00:04:11.478 13:33:14 -- common/autotest_common.sh@941 -- # uname 00:04:11.478 13:33:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:11.478 13:33:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1012775 00:04:11.736 13:33:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:11.736 13:33:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:11.736 13:33:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1012775' 00:04:11.736 killing process with pid 1012775 00:04:11.736 13:33:14 -- common/autotest_common.sh@955 -- # kill 1012775 00:04:11.736 13:33:14 -- common/autotest_common.sh@960 -- # wait 1012775 00:04:13.639 13:33:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:13.639 13:33:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:13.639 13:33:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:13.639 13:33:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:13.639 13:33:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:13.640 13:33:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:13.640 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.640 13:33:16 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:13.640 13:33:16 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.640 13:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.640 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.640 ************************************ 00:04:13.640 START TEST env 00:04:13.640 ************************************ 00:04:13.640 13:33:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:13.640 * Looking for test storage... 00:04:13.640 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:13.640 13:33:16 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:13.640 13:33:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.640 13:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.640 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.640 ************************************ 00:04:13.640 START TEST env_memory 00:04:13.640 ************************************ 00:04:13.640 13:33:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:13.640 00:04:13.640 00:04:13.640 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.640 http://cunit.sourceforge.net/ 00:04:13.640 00:04:13.640 00:04:13.640 Suite: memory 00:04:13.899 Test: alloc and free memory map ...[2024-04-18 13:33:16.480533] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:13.899 passed 00:04:13.899 Test: mem map translation ...[2024-04-18 13:33:16.523175] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:13.899 [2024-04-18 13:33:16.523214] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:13.899 [2024-04-18 13:33:16.523267] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:13.899 [2024-04-18 13:33:16.523282] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:13.899 passed 00:04:13.899 Test: mem map registration ...[2024-04-18 13:33:16.620357] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:13.899 [2024-04-18 13:33:16.620412] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:13.899 passed 00:04:14.158 Test: mem map adjacent registrations ...passed 00:04:14.158 00:04:14.158 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.158 suites 1 1 n/a 0 0 00:04:14.158 tests 4 4 4 0 0 00:04:14.158 asserts 152 152 152 0 n/a 00:04:14.158 00:04:14.158 Elapsed time = 0.311 seconds 00:04:14.158 00:04:14.158 real 0m0.323s 00:04:14.158 user 0m0.310s 00:04:14.158 sys 0m0.011s 00:04:14.158 13:33:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:14.158 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:04:14.158 ************************************ 
00:04:14.158 END TEST env_memory 00:04:14.158 ************************************ 00:04:14.158 13:33:16 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:14.158 13:33:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.158 13:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.158 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:04:14.158 ************************************ 00:04:14.158 START TEST env_vtophys 00:04:14.158 ************************************ 00:04:14.158 13:33:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:14.158 EAL: lib.eal log level changed from notice to debug 00:04:14.158 EAL: Detected lcore 0 as core 0 on socket 0 00:04:14.158 EAL: Detected lcore 1 as core 1 on socket 0 00:04:14.158 EAL: Detected lcore 2 as core 2 on socket 0 00:04:14.158 EAL: Detected lcore 3 as core 3 on socket 0 00:04:14.158 EAL: Detected lcore 4 as core 4 on socket 0 00:04:14.158 EAL: Detected lcore 5 as core 5 on socket 0 00:04:14.158 EAL: Detected lcore 6 as core 8 on socket 0 00:04:14.158 EAL: Detected lcore 7 as core 9 on socket 0 00:04:14.158 EAL: Detected lcore 8 as core 10 on socket 0 00:04:14.158 EAL: Detected lcore 9 as core 11 on socket 0 00:04:14.158 EAL: Detected lcore 10 as core 12 on socket 0 00:04:14.158 EAL: Detected lcore 11 as core 13 on socket 0 00:04:14.158 EAL: Detected lcore 12 as core 0 on socket 1 00:04:14.158 EAL: Detected lcore 13 as core 1 on socket 1 00:04:14.158 EAL: Detected lcore 14 as core 2 on socket 1 00:04:14.158 EAL: Detected lcore 15 as core 3 on socket 1 00:04:14.158 EAL: Detected lcore 16 as core 4 on socket 1 00:04:14.158 EAL: Detected lcore 17 as core 5 on socket 1 00:04:14.158 EAL: Detected lcore 18 as core 8 on socket 1 00:04:14.158 EAL: Detected lcore 19 as core 9 on socket 1 00:04:14.158 EAL: Detected lcore 20 as core 10 on socket 1 00:04:14.158 EAL: Detected lcore 21 as core 11 on socket 1 00:04:14.158 EAL: Detected lcore 22 as core 12 on socket 1 00:04:14.158 EAL: Detected lcore 23 as core 13 on socket 1 00:04:14.158 EAL: Detected lcore 24 as core 0 on socket 0 00:04:14.158 EAL: Detected lcore 25 as core 1 on socket 0 00:04:14.158 EAL: Detected lcore 26 as core 2 on socket 0 00:04:14.158 EAL: Detected lcore 27 as core 3 on socket 0 00:04:14.158 EAL: Detected lcore 28 as core 4 on socket 0 00:04:14.158 EAL: Detected lcore 29 as core 5 on socket 0 00:04:14.158 EAL: Detected lcore 30 as core 8 on socket 0 00:04:14.158 EAL: Detected lcore 31 as core 9 on socket 0 00:04:14.158 EAL: Detected lcore 32 as core 10 on socket 0 00:04:14.158 EAL: Detected lcore 33 as core 11 on socket 0 00:04:14.158 EAL: Detected lcore 34 as core 12 on socket 0 00:04:14.158 EAL: Detected lcore 35 as core 13 on socket 0 00:04:14.158 EAL: Detected lcore 36 as core 0 on socket 1 00:04:14.158 EAL: Detected lcore 37 as core 1 on socket 1 00:04:14.158 EAL: Detected lcore 38 as core 2 on socket 1 00:04:14.158 EAL: Detected lcore 39 as core 3 on socket 1 00:04:14.158 EAL: Detected lcore 40 as core 4 on socket 1 00:04:14.158 EAL: Detected lcore 41 as core 5 on socket 1 00:04:14.158 EAL: Detected lcore 42 as core 8 on socket 1 00:04:14.158 EAL: Detected lcore 43 as core 9 on socket 1 00:04:14.159 EAL: Detected lcore 44 as core 10 on socket 1 00:04:14.159 EAL: Detected lcore 45 as core 11 on socket 1 00:04:14.159 EAL: Detected lcore 46 as core 12 on socket 1 00:04:14.159 EAL: Detected lcore 47 as core 13 on 
socket 1 00:04:14.159 EAL: Maximum logical cores by configuration: 128 00:04:14.159 EAL: Detected CPU lcores: 48 00:04:14.159 EAL: Detected NUMA nodes: 2 00:04:14.159 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:14.159 EAL: Detected shared linkage of DPDK 00:04:14.159 EAL: No shared files mode enabled, IPC will be disabled 00:04:14.435 EAL: Bus pci wants IOVA as 'DC' 00:04:14.435 EAL: Buses did not request a specific IOVA mode. 00:04:14.435 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:14.435 EAL: Selected IOVA mode 'VA' 00:04:14.435 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.435 EAL: Probing VFIO support... 00:04:14.435 EAL: IOMMU type 1 (Type 1) is supported 00:04:14.435 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:14.435 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:14.435 EAL: VFIO support initialized 00:04:14.435 EAL: Ask a virtual area of 0x2e000 bytes 00:04:14.435 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:14.435 EAL: Setting up physically contiguous memory... 00:04:14.435 EAL: Setting maximum number of open files to 524288 00:04:14.435 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:14.435 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:14.435 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:14.435 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.435 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:14.435 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.435 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.435 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:14.435 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:14.435 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.435 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:14.435 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.435 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.435 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:14.435 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:14.435 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.435 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:14.435 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.435 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.435 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:14.435 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:14.435 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.435 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:14.436 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.436 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.436 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:14.436 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:14.436 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:14.436 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.436 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:14.436 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.436 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.436 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:14.436 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:14.436 EAL: Ask a virtual area of 0x61000 bytes 
00:04:14.436 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:14.436 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.436 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.436 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:14.436 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:14.436 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.436 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:14.436 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.436 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.436 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:14.436 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:14.436 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.436 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:14.436 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.436 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.436 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:14.436 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:14.436 EAL: Hugepages will be freed exactly as allocated. 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: TSC frequency is ~2700000 KHz 00:04:14.436 EAL: Main lcore 0 is ready (tid=7f0f9edeea00;cpuset=[0]) 00:04:14.436 EAL: Trying to obtain current memory policy. 00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 0 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 2MB 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:14.436 EAL: Mem event callback 'spdk:(nil)' registered 00:04:14.436 00:04:14.436 00:04:14.436 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.436 http://cunit.sourceforge.net/ 00:04:14.436 00:04:14.436 00:04:14.436 Suite: components_suite 00:04:14.436 Test: vtophys_malloc_test ...passed 00:04:14.436 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 4 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 4MB 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was shrunk by 4MB 00:04:14.436 EAL: Trying to obtain current memory policy. 
00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 4 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 6MB 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was shrunk by 6MB 00:04:14.436 EAL: Trying to obtain current memory policy. 00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 4 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 10MB 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was shrunk by 10MB 00:04:14.436 EAL: Trying to obtain current memory policy. 00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 4 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 18MB 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was shrunk by 18MB 00:04:14.436 EAL: Trying to obtain current memory policy. 00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 4 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 34MB 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was shrunk by 34MB 00:04:14.436 EAL: Trying to obtain current memory policy. 00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 4 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 66MB 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was shrunk by 66MB 00:04:14.436 EAL: Trying to obtain current memory policy. 
00:04:14.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.436 EAL: Restoring previous memory policy: 4 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was expanded by 130MB 00:04:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.436 EAL: request: mp_malloc_sync 00:04:14.436 EAL: No shared files mode enabled, IPC is disabled 00:04:14.436 EAL: Heap on socket 0 was shrunk by 130MB 00:04:14.437 EAL: Trying to obtain current memory policy. 00:04:14.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.733 EAL: Restoring previous memory policy: 4 00:04:14.733 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.733 EAL: request: mp_malloc_sync 00:04:14.733 EAL: No shared files mode enabled, IPC is disabled 00:04:14.733 EAL: Heap on socket 0 was expanded by 258MB 00:04:14.733 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.733 EAL: request: mp_malloc_sync 00:04:14.733 EAL: No shared files mode enabled, IPC is disabled 00:04:14.733 EAL: Heap on socket 0 was shrunk by 258MB 00:04:14.733 EAL: Trying to obtain current memory policy. 00:04:14.733 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.733 EAL: Restoring previous memory policy: 4 00:04:14.733 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.733 EAL: request: mp_malloc_sync 00:04:14.733 EAL: No shared files mode enabled, IPC is disabled 00:04:14.733 EAL: Heap on socket 0 was expanded by 514MB 00:04:14.992 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.992 EAL: request: mp_malloc_sync 00:04:14.992 EAL: No shared files mode enabled, IPC is disabled 00:04:14.992 EAL: Heap on socket 0 was shrunk by 514MB 00:04:14.992 EAL: Trying to obtain current memory policy. 
00:04:14.992 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.250 EAL: Restoring previous memory policy: 4 00:04:15.250 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.250 EAL: request: mp_malloc_sync 00:04:15.250 EAL: No shared files mode enabled, IPC is disabled 00:04:15.250 EAL: Heap on socket 0 was expanded by 1026MB 00:04:15.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.769 EAL: request: mp_malloc_sync 00:04:15.769 EAL: No shared files mode enabled, IPC is disabled 00:04:15.769 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:15.769 passed 00:04:15.769 00:04:15.769 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.769 suites 1 1 n/a 0 0 00:04:15.769 tests 2 2 2 0 0 00:04:15.769 asserts 497 497 497 0 n/a 00:04:15.769 00:04:15.769 Elapsed time = 1.453 seconds 00:04:15.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.769 EAL: request: mp_malloc_sync 00:04:15.769 EAL: No shared files mode enabled, IPC is disabled 00:04:15.769 EAL: Heap on socket 0 was shrunk by 2MB 00:04:15.769 EAL: No shared files mode enabled, IPC is disabled 00:04:15.769 EAL: No shared files mode enabled, IPC is disabled 00:04:15.769 EAL: No shared files mode enabled, IPC is disabled 00:04:15.769 00:04:15.769 real 0m1.642s 00:04:15.769 user 0m0.923s 00:04:15.769 sys 0m0.672s 00:04:15.769 13:33:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:15.769 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:04:15.769 ************************************ 00:04:15.769 END TEST env_vtophys 00:04:15.769 ************************************ 00:04:15.769 13:33:18 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:15.769 13:33:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.769 13:33:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.769 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:04:16.028 ************************************ 00:04:16.028 START TEST env_pci 00:04:16.028 ************************************ 00:04:16.028 13:33:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:16.028 00:04:16.028 00:04:16.028 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.028 http://cunit.sourceforge.net/ 00:04:16.028 00:04:16.028 00:04:16.028 Suite: pci 00:04:16.028 Test: pci_hook ...[2024-04-18 13:33:18.689154] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1013824 has claimed it 00:04:16.028 EAL: Cannot find device (10000:00:01.0) 00:04:16.028 EAL: Failed to attach device on primary process 00:04:16.028 passed 00:04:16.028 00:04:16.028 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.028 suites 1 1 n/a 0 0 00:04:16.028 tests 1 1 1 0 0 00:04:16.028 asserts 25 25 25 0 n/a 00:04:16.028 00:04:16.028 Elapsed time = 0.056 seconds 00:04:16.028 00:04:16.028 real 0m0.079s 00:04:16.028 user 0m0.024s 00:04:16.028 sys 0m0.054s 00:04:16.028 13:33:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.028 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:04:16.028 ************************************ 00:04:16.028 END TEST env_pci 00:04:16.028 ************************************ 00:04:16.028 13:33:18 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:16.028 13:33:18 -- env/env.sh@15 -- # uname 00:04:16.028 13:33:18 -- env/env.sh@15 -- # '[' Linux = Linux ']' 
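Annotation: around this point env.sh assembles the EAL arguments for env_dpdk_post_init, a core mask plus, on Linux, a fixed --base-virtaddr to pin DPDK's mappings at a known base. Reconstructed as plain shell (the binary path is a guess at the build layout, not quoted from the script):
  argv='-c 0x1 '
  if [ "$(uname)" = Linux ]; then
      argv+='--base-virtaddr=0x200000000000'   # keep EAL mappings at a fixed virtual base
  fi
  test/env/env_dpdk_post_init/env_dpdk_post_init $argv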
00:04:16.028 13:33:18 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:16.028 13:33:18 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.028 13:33:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:16.028 13:33:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.028 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:04:16.287 ************************************ 00:04:16.287 START TEST env_dpdk_post_init 00:04:16.287 ************************************ 00:04:16.287 13:33:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.287 EAL: Detected CPU lcores: 48 00:04:16.287 EAL: Detected NUMA nodes: 2 00:04:16.287 EAL: Detected shared linkage of DPDK 00:04:16.287 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.287 EAL: Selected IOVA mode 'VA' 00:04:16.287 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.287 EAL: VFIO support initialized 00:04:16.287 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.287 EAL: Using IOMMU type 1 (Type 1) 00:04:16.287 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:16.287 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:16.287 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:16.547 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:17.482 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:04:20.760 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:04:20.760 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:04:20.760 Starting DPDK initialization... 00:04:20.760 Starting SPDK post initialization... 00:04:20.760 SPDK NVMe probe 00:04:20.760 Attaching to 0000:84:00.0 00:04:20.760 Attached to 0000:84:00.0 00:04:20.760 Cleaning up... 
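Annotation: after the probe above, 0000:84:00.0 is attached through vfio-pci and released again. Which driver currently owns the device, and the device id that the earlier 0x0a54 lookup matched, can both be read straight from sysfs:
  bdf=0000:84:00.0
  basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"   # expected to print vfio-pci at this stage of the run
  cat /sys/bus/pci/devices/$bdf/device                         # 0x0a54, the id grepped for earlier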
00:04:20.760 00:04:20.760 real 0m4.459s 00:04:20.760 user 0m3.280s 00:04:20.760 sys 0m0.237s 00:04:20.760 13:33:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:20.760 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:20.760 ************************************ 00:04:20.760 END TEST env_dpdk_post_init 00:04:20.760 ************************************ 00:04:20.760 13:33:23 -- env/env.sh@26 -- # uname 00:04:20.760 13:33:23 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:20.760 13:33:23 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.760 13:33:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.760 13:33:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.760 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:20.760 ************************************ 00:04:20.760 START TEST env_mem_callbacks 00:04:20.760 ************************************ 00:04:20.760 13:33:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.760 EAL: Detected CPU lcores: 48 00:04:20.760 EAL: Detected NUMA nodes: 2 00:04:20.760 EAL: Detected shared linkage of DPDK 00:04:20.760 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.760 EAL: Selected IOVA mode 'VA' 00:04:20.760 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.760 EAL: VFIO support initialized 00:04:21.019 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.019 00:04:21.019 00:04:21.019 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.019 http://cunit.sourceforge.net/ 00:04:21.019 00:04:21.019 00:04:21.019 Suite: memory 00:04:21.019 Test: test ... 
00:04:21.019 register 0x200000200000 2097152 00:04:21.019 malloc 3145728 00:04:21.019 register 0x200000400000 4194304 00:04:21.019 buf 0x200000500000 len 3145728 PASSED 00:04:21.019 malloc 64 00:04:21.019 buf 0x2000004fff40 len 64 PASSED 00:04:21.019 malloc 4194304 00:04:21.019 register 0x200000800000 6291456 00:04:21.019 buf 0x200000a00000 len 4194304 PASSED 00:04:21.019 free 0x200000500000 3145728 00:04:21.019 free 0x2000004fff40 64 00:04:21.019 unregister 0x200000400000 4194304 PASSED 00:04:21.019 free 0x200000a00000 4194304 00:04:21.019 unregister 0x200000800000 6291456 PASSED 00:04:21.019 malloc 8388608 00:04:21.019 register 0x200000400000 10485760 00:04:21.019 buf 0x200000600000 len 8388608 PASSED 00:04:21.019 free 0x200000600000 8388608 00:04:21.019 unregister 0x200000400000 10485760 PASSED 00:04:21.019 passed 00:04:21.019 00:04:21.019 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.019 suites 1 1 n/a 0 0 00:04:21.019 tests 1 1 1 0 0 00:04:21.019 asserts 15 15 15 0 n/a 00:04:21.019 00:04:21.019 Elapsed time = 0.006 seconds 00:04:21.019 00:04:21.019 real 0m0.057s 00:04:21.019 user 0m0.020s 00:04:21.019 sys 0m0.036s 00:04:21.019 13:33:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.019 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.019 ************************************ 00:04:21.019 END TEST env_mem_callbacks 00:04:21.019 ************************************ 00:04:21.019 00:04:21.019 real 0m7.342s 00:04:21.019 user 0m4.844s 00:04:21.019 sys 0m1.466s 00:04:21.019 13:33:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.019 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.019 ************************************ 00:04:21.019 END TEST env 00:04:21.019 ************************************ 00:04:21.019 13:33:23 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:21.019 13:33:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.019 13:33:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.019 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.019 ************************************ 00:04:21.019 START TEST rpc 00:04:21.019 ************************************ 00:04:21.019 13:33:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:21.019 * Looking for test storage... 00:04:21.019 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:21.019 13:33:23 -- rpc/rpc.sh@65 -- # spdk_pid=1014555 00:04:21.020 13:33:23 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:21.020 13:33:23 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.020 13:33:23 -- rpc/rpc.sh@67 -- # waitforlisten 1014555 00:04:21.020 13:33:23 -- common/autotest_common.sh@817 -- # '[' -z 1014555 ']' 00:04:21.020 13:33:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.020 13:33:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:21.020 13:33:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
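From this point the rpc suite keeps a single long-lived spdk_tgt running, started with the bdev tracepoint group enabled (-e bdev), and waits for its UNIX-domain RPC socket before issuing any commands. A rough manual equivalent of that startup-and-wait step, assuming the default /var/tmp/spdk.sock socket, is:

    ./build/bin/spdk_tgt -m 0x1 -e bdev &
    tgt_pid=$!
    # poll until the RPC server answers; spdk_get_version is a cheap query with no side effects
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done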
00:04:21.020 13:33:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:21.020 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.278 [2024-04-18 13:33:23.873613] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:21.278 [2024-04-18 13:33:23.873721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014555 ] 00:04:21.278 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.278 [2024-04-18 13:33:23.958217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.278 [2024-04-18 13:33:24.078718] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:21.278 [2024-04-18 13:33:24.078784] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1014555' to capture a snapshot of events at runtime. 00:04:21.278 [2024-04-18 13:33:24.078800] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:21.278 [2024-04-18 13:33:24.078813] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:21.278 [2024-04-18 13:33:24.078825] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1014555 for offline analysis/debug. 00:04:21.278 [2024-04-18 13:33:24.078858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.845 13:33:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:21.845 13:33:24 -- common/autotest_common.sh@850 -- # return 0 00:04:21.845 13:33:24 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:21.845 13:33:24 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:21.845 13:33:24 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:21.845 13:33:24 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:21.845 13:33:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.845 13:33:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.845 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.845 ************************************ 00:04:21.845 START TEST rpc_integrity 00:04:21.845 ************************************ 00:04:21.845 13:33:24 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:21.845 13:33:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.845 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.845 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.845 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.845 13:33:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.845 13:33:24 -- rpc/rpc.sh@13 -- # jq length 00:04:21.845 13:33:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.845 13:33:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.845 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.845 13:33:24 -- 
common/autotest_common.sh@10 -- # set +x 00:04:21.845 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.845 13:33:24 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:21.845 13:33:24 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.845 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.845 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.845 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.845 13:33:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.845 { 00:04:21.845 "name": "Malloc0", 00:04:21.845 "aliases": [ 00:04:21.845 "518b58df-e363-4112-8d8c-24298870ff34" 00:04:21.845 ], 00:04:21.845 "product_name": "Malloc disk", 00:04:21.845 "block_size": 512, 00:04:21.845 "num_blocks": 16384, 00:04:21.845 "uuid": "518b58df-e363-4112-8d8c-24298870ff34", 00:04:21.845 "assigned_rate_limits": { 00:04:21.845 "rw_ios_per_sec": 0, 00:04:21.845 "rw_mbytes_per_sec": 0, 00:04:21.845 "r_mbytes_per_sec": 0, 00:04:21.845 "w_mbytes_per_sec": 0 00:04:21.845 }, 00:04:21.845 "claimed": false, 00:04:21.845 "zoned": false, 00:04:21.845 "supported_io_types": { 00:04:21.845 "read": true, 00:04:21.845 "write": true, 00:04:21.845 "unmap": true, 00:04:21.845 "write_zeroes": true, 00:04:21.845 "flush": true, 00:04:21.845 "reset": true, 00:04:21.845 "compare": false, 00:04:21.845 "compare_and_write": false, 00:04:21.845 "abort": true, 00:04:21.845 "nvme_admin": false, 00:04:21.845 "nvme_io": false 00:04:21.845 }, 00:04:21.845 "memory_domains": [ 00:04:21.845 { 00:04:21.845 "dma_device_id": "system", 00:04:21.845 "dma_device_type": 1 00:04:21.845 }, 00:04:21.845 { 00:04:21.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.845 "dma_device_type": 2 00:04:21.845 } 00:04:21.845 ], 00:04:21.845 "driver_specific": {} 00:04:21.845 } 00:04:21.845 ]' 00:04:21.845 13:33:24 -- rpc/rpc.sh@17 -- # jq length 00:04:21.845 13:33:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.845 13:33:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:21.845 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.845 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.845 [2024-04-18 13:33:24.578084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:21.845 [2024-04-18 13:33:24.578129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.845 [2024-04-18 13:33:24.578153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc15080 00:04:21.845 [2024-04-18 13:33:24.578168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.845 [2024-04-18 13:33:24.579706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.845 [2024-04-18 13:33:24.579735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.845 Passthru0 00:04:21.845 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.845 13:33:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.845 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.845 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.845 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.845 13:33:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.845 { 00:04:21.845 "name": "Malloc0", 00:04:21.845 "aliases": [ 00:04:21.845 "518b58df-e363-4112-8d8c-24298870ff34" 00:04:21.845 ], 00:04:21.845 "product_name": "Malloc disk", 00:04:21.845 "block_size": 512, 00:04:21.845 "num_blocks": 
16384, 00:04:21.845 "uuid": "518b58df-e363-4112-8d8c-24298870ff34", 00:04:21.845 "assigned_rate_limits": { 00:04:21.845 "rw_ios_per_sec": 0, 00:04:21.845 "rw_mbytes_per_sec": 0, 00:04:21.845 "r_mbytes_per_sec": 0, 00:04:21.845 "w_mbytes_per_sec": 0 00:04:21.845 }, 00:04:21.845 "claimed": true, 00:04:21.845 "claim_type": "exclusive_write", 00:04:21.845 "zoned": false, 00:04:21.845 "supported_io_types": { 00:04:21.845 "read": true, 00:04:21.845 "write": true, 00:04:21.845 "unmap": true, 00:04:21.845 "write_zeroes": true, 00:04:21.845 "flush": true, 00:04:21.845 "reset": true, 00:04:21.845 "compare": false, 00:04:21.845 "compare_and_write": false, 00:04:21.845 "abort": true, 00:04:21.845 "nvme_admin": false, 00:04:21.845 "nvme_io": false 00:04:21.845 }, 00:04:21.845 "memory_domains": [ 00:04:21.845 { 00:04:21.845 "dma_device_id": "system", 00:04:21.845 "dma_device_type": 1 00:04:21.845 }, 00:04:21.845 { 00:04:21.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.845 "dma_device_type": 2 00:04:21.845 } 00:04:21.845 ], 00:04:21.845 "driver_specific": {} 00:04:21.845 }, 00:04:21.845 { 00:04:21.845 "name": "Passthru0", 00:04:21.845 "aliases": [ 00:04:21.845 "4ce00f84-db02-5089-b1ef-e0680bb57d01" 00:04:21.845 ], 00:04:21.845 "product_name": "passthru", 00:04:21.845 "block_size": 512, 00:04:21.845 "num_blocks": 16384, 00:04:21.846 "uuid": "4ce00f84-db02-5089-b1ef-e0680bb57d01", 00:04:21.846 "assigned_rate_limits": { 00:04:21.846 "rw_ios_per_sec": 0, 00:04:21.846 "rw_mbytes_per_sec": 0, 00:04:21.846 "r_mbytes_per_sec": 0, 00:04:21.846 "w_mbytes_per_sec": 0 00:04:21.846 }, 00:04:21.846 "claimed": false, 00:04:21.846 "zoned": false, 00:04:21.846 "supported_io_types": { 00:04:21.846 "read": true, 00:04:21.846 "write": true, 00:04:21.846 "unmap": true, 00:04:21.846 "write_zeroes": true, 00:04:21.846 "flush": true, 00:04:21.846 "reset": true, 00:04:21.846 "compare": false, 00:04:21.846 "compare_and_write": false, 00:04:21.846 "abort": true, 00:04:21.846 "nvme_admin": false, 00:04:21.846 "nvme_io": false 00:04:21.846 }, 00:04:21.846 "memory_domains": [ 00:04:21.846 { 00:04:21.846 "dma_device_id": "system", 00:04:21.846 "dma_device_type": 1 00:04:21.846 }, 00:04:21.846 { 00:04:21.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.846 "dma_device_type": 2 00:04:21.846 } 00:04:21.846 ], 00:04:21.846 "driver_specific": { 00:04:21.846 "passthru": { 00:04:21.846 "name": "Passthru0", 00:04:21.846 "base_bdev_name": "Malloc0" 00:04:21.846 } 00:04:21.846 } 00:04:21.846 } 00:04:21.846 ]' 00:04:21.846 13:33:24 -- rpc/rpc.sh@21 -- # jq length 00:04:21.846 13:33:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.846 13:33:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.846 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.846 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:21.846 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.846 13:33:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:21.846 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.846 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.104 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.104 13:33:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:22.104 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.104 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.104 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.104 13:33:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:22.104 13:33:24 -- rpc/rpc.sh@26 -- # jq length 00:04:22.104 13:33:24 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.104 00:04:22.104 real 0m0.237s 00:04:22.104 user 0m0.161s 00:04:22.104 sys 0m0.018s 00:04:22.104 13:33:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.104 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.104 ************************************ 00:04:22.104 END TEST rpc_integrity 00:04:22.104 ************************************ 00:04:22.104 13:33:24 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:22.104 13:33:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.104 13:33:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.104 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.104 ************************************ 00:04:22.104 START TEST rpc_plugins 00:04:22.104 ************************************ 00:04:22.104 13:33:24 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:22.104 13:33:24 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:22.104 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.104 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.104 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.104 13:33:24 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:22.104 13:33:24 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:22.104 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.104 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.104 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.104 13:33:24 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:22.104 { 00:04:22.104 "name": "Malloc1", 00:04:22.104 "aliases": [ 00:04:22.104 "95168d4b-6a8c-4bcf-b44f-a73935aa0f32" 00:04:22.104 ], 00:04:22.104 "product_name": "Malloc disk", 00:04:22.104 "block_size": 4096, 00:04:22.104 "num_blocks": 256, 00:04:22.104 "uuid": "95168d4b-6a8c-4bcf-b44f-a73935aa0f32", 00:04:22.104 "assigned_rate_limits": { 00:04:22.104 "rw_ios_per_sec": 0, 00:04:22.104 "rw_mbytes_per_sec": 0, 00:04:22.104 "r_mbytes_per_sec": 0, 00:04:22.104 "w_mbytes_per_sec": 0 00:04:22.104 }, 00:04:22.104 "claimed": false, 00:04:22.104 "zoned": false, 00:04:22.104 "supported_io_types": { 00:04:22.104 "read": true, 00:04:22.104 "write": true, 00:04:22.104 "unmap": true, 00:04:22.104 "write_zeroes": true, 00:04:22.104 "flush": true, 00:04:22.104 "reset": true, 00:04:22.104 "compare": false, 00:04:22.104 "compare_and_write": false, 00:04:22.104 "abort": true, 00:04:22.104 "nvme_admin": false, 00:04:22.104 "nvme_io": false 00:04:22.104 }, 00:04:22.104 "memory_domains": [ 00:04:22.104 { 00:04:22.104 "dma_device_id": "system", 00:04:22.104 "dma_device_type": 1 00:04:22.104 }, 00:04:22.104 { 00:04:22.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.104 "dma_device_type": 2 00:04:22.104 } 00:04:22.104 ], 00:04:22.104 "driver_specific": {} 00:04:22.104 } 00:04:22.104 ]' 00:04:22.104 13:33:24 -- rpc/rpc.sh@32 -- # jq length 00:04:22.363 13:33:24 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:22.363 13:33:24 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:22.363 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.363 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.363 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.363 13:33:24 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:22.363 13:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 
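Both rpc_integrity above and rpc_plugins here drive the target through scripts/rpc.py in the same round-trip pattern: create a malloc bdev, optionally stack a passthru bdev on top of it, confirm the result in bdev_get_bdevs (the tests only count the returned array with jq), then delete everything and check the list is empty again. Done by hand against a running target, the integrity sequence looks roughly like:

    ./scripts/rpc.py bdev_malloc_create 8 512                      # creates Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0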
00:04:22.363 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.363 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.363 13:33:24 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:22.363 13:33:24 -- rpc/rpc.sh@36 -- # jq length 00:04:22.363 13:33:24 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:22.363 00:04:22.363 real 0m0.122s 00:04:22.363 user 0m0.080s 00:04:22.363 sys 0m0.010s 00:04:22.363 13:33:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.363 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.363 ************************************ 00:04:22.363 END TEST rpc_plugins 00:04:22.363 ************************************ 00:04:22.363 13:33:25 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:22.363 13:33:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.363 13:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.363 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.363 ************************************ 00:04:22.363 START TEST rpc_trace_cmd_test 00:04:22.363 ************************************ 00:04:22.363 13:33:25 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:22.363 13:33:25 -- rpc/rpc.sh@40 -- # local info 00:04:22.363 13:33:25 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:22.363 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.363 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.363 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.363 13:33:25 -- rpc/rpc.sh@42 -- # info='{ 00:04:22.363 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1014555", 00:04:22.363 "tpoint_group_mask": "0x8", 00:04:22.363 "iscsi_conn": { 00:04:22.363 "mask": "0x2", 00:04:22.363 "tpoint_mask": "0x0" 00:04:22.363 }, 00:04:22.363 "scsi": { 00:04:22.363 "mask": "0x4", 00:04:22.363 "tpoint_mask": "0x0" 00:04:22.363 }, 00:04:22.363 "bdev": { 00:04:22.363 "mask": "0x8", 00:04:22.363 "tpoint_mask": "0xffffffffffffffff" 00:04:22.363 }, 00:04:22.363 "nvmf_rdma": { 00:04:22.363 "mask": "0x10", 00:04:22.363 "tpoint_mask": "0x0" 00:04:22.363 }, 00:04:22.363 "nvmf_tcp": { 00:04:22.363 "mask": "0x20", 00:04:22.363 "tpoint_mask": "0x0" 00:04:22.363 }, 00:04:22.363 "ftl": { 00:04:22.363 "mask": "0x40", 00:04:22.363 "tpoint_mask": "0x0" 00:04:22.363 }, 00:04:22.363 "blobfs": { 00:04:22.363 "mask": "0x80", 00:04:22.363 "tpoint_mask": "0x0" 00:04:22.363 }, 00:04:22.363 "dsa": { 00:04:22.363 "mask": "0x200", 00:04:22.363 "tpoint_mask": "0x0" 00:04:22.364 }, 00:04:22.364 "thread": { 00:04:22.364 "mask": "0x400", 00:04:22.364 "tpoint_mask": "0x0" 00:04:22.364 }, 00:04:22.364 "nvme_pcie": { 00:04:22.364 "mask": "0x800", 00:04:22.364 "tpoint_mask": "0x0" 00:04:22.364 }, 00:04:22.364 "iaa": { 00:04:22.364 "mask": "0x1000", 00:04:22.364 "tpoint_mask": "0x0" 00:04:22.364 }, 00:04:22.364 "nvme_tcp": { 00:04:22.364 "mask": "0x2000", 00:04:22.364 "tpoint_mask": "0x0" 00:04:22.364 }, 00:04:22.364 "bdev_nvme": { 00:04:22.364 "mask": "0x4000", 00:04:22.364 "tpoint_mask": "0x0" 00:04:22.364 }, 00:04:22.364 "sock": { 00:04:22.364 "mask": "0x8000", 00:04:22.364 "tpoint_mask": "0x0" 00:04:22.364 } 00:04:22.364 }' 00:04:22.364 13:33:25 -- rpc/rpc.sh@43 -- # jq length 00:04:22.622 13:33:25 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:22.622 13:33:25 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:22.622 13:33:25 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:22.622 13:33:25 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:22.622 13:33:25 -- 
rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:22.622 13:33:25 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:22.622 13:33:25 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:22.622 13:33:25 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:22.622 13:33:25 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:22.622 00:04:22.622 real 0m0.208s 00:04:22.622 user 0m0.187s 00:04:22.622 sys 0m0.013s 00:04:22.622 13:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.622 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.622 ************************************ 00:04:22.622 END TEST rpc_trace_cmd_test 00:04:22.622 ************************************ 00:04:22.622 13:33:25 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:22.622 13:33:25 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:22.622 13:33:25 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:22.622 13:33:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.622 13:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.622 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.880 ************************************ 00:04:22.880 START TEST rpc_daemon_integrity 00:04:22.880 ************************************ 00:04:22.880 13:33:25 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:22.880 13:33:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.880 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.880 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.880 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.880 13:33:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.880 13:33:25 -- rpc/rpc.sh@13 -- # jq length 00:04:22.880 13:33:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.880 13:33:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.880 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.880 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.880 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.880 13:33:25 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:22.880 13:33:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.880 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.880 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.880 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.880 13:33:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.880 { 00:04:22.880 "name": "Malloc2", 00:04:22.880 "aliases": [ 00:04:22.880 "95a622ff-84c3-4fd3-84f3-9e6b5d7bd8bd" 00:04:22.880 ], 00:04:22.880 "product_name": "Malloc disk", 00:04:22.880 "block_size": 512, 00:04:22.880 "num_blocks": 16384, 00:04:22.880 "uuid": "95a622ff-84c3-4fd3-84f3-9e6b5d7bd8bd", 00:04:22.880 "assigned_rate_limits": { 00:04:22.880 "rw_ios_per_sec": 0, 00:04:22.880 "rw_mbytes_per_sec": 0, 00:04:22.881 "r_mbytes_per_sec": 0, 00:04:22.881 "w_mbytes_per_sec": 0 00:04:22.881 }, 00:04:22.881 "claimed": false, 00:04:22.881 "zoned": false, 00:04:22.881 "supported_io_types": { 00:04:22.881 "read": true, 00:04:22.881 "write": true, 00:04:22.881 "unmap": true, 00:04:22.881 "write_zeroes": true, 00:04:22.881 "flush": true, 00:04:22.881 "reset": true, 00:04:22.881 "compare": false, 00:04:22.881 "compare_and_write": false, 00:04:22.881 "abort": true, 00:04:22.881 "nvme_admin": false, 00:04:22.881 "nvme_io": false 00:04:22.881 }, 00:04:22.881 "memory_domains": [ 00:04:22.881 { 00:04:22.881 "dma_device_id": "system", 00:04:22.881 "dma_device_type": 1 
00:04:22.881 }, 00:04:22.881 { 00:04:22.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.881 "dma_device_type": 2 00:04:22.881 } 00:04:22.881 ], 00:04:22.881 "driver_specific": {} 00:04:22.881 } 00:04:22.881 ]' 00:04:22.881 13:33:25 -- rpc/rpc.sh@17 -- # jq length 00:04:22.881 13:33:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.881 13:33:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:22.881 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.881 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.881 [2024-04-18 13:33:25.613479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:22.881 [2024-04-18 13:33:25.613535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.881 [2024-04-18 13:33:25.613577] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc14d40 00:04:22.881 [2024-04-18 13:33:25.613603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.881 [2024-04-18 13:33:25.615036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.881 [2024-04-18 13:33:25.615066] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.881 Passthru0 00:04:22.881 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.881 13:33:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.881 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.881 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.881 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.881 13:33:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:22.881 { 00:04:22.881 "name": "Malloc2", 00:04:22.881 "aliases": [ 00:04:22.881 "95a622ff-84c3-4fd3-84f3-9e6b5d7bd8bd" 00:04:22.881 ], 00:04:22.881 "product_name": "Malloc disk", 00:04:22.881 "block_size": 512, 00:04:22.881 "num_blocks": 16384, 00:04:22.881 "uuid": "95a622ff-84c3-4fd3-84f3-9e6b5d7bd8bd", 00:04:22.881 "assigned_rate_limits": { 00:04:22.881 "rw_ios_per_sec": 0, 00:04:22.881 "rw_mbytes_per_sec": 0, 00:04:22.881 "r_mbytes_per_sec": 0, 00:04:22.881 "w_mbytes_per_sec": 0 00:04:22.881 }, 00:04:22.881 "claimed": true, 00:04:22.881 "claim_type": "exclusive_write", 00:04:22.881 "zoned": false, 00:04:22.881 "supported_io_types": { 00:04:22.881 "read": true, 00:04:22.881 "write": true, 00:04:22.881 "unmap": true, 00:04:22.881 "write_zeroes": true, 00:04:22.881 "flush": true, 00:04:22.881 "reset": true, 00:04:22.881 "compare": false, 00:04:22.881 "compare_and_write": false, 00:04:22.881 "abort": true, 00:04:22.881 "nvme_admin": false, 00:04:22.881 "nvme_io": false 00:04:22.881 }, 00:04:22.881 "memory_domains": [ 00:04:22.881 { 00:04:22.881 "dma_device_id": "system", 00:04:22.881 "dma_device_type": 1 00:04:22.881 }, 00:04:22.881 { 00:04:22.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.881 "dma_device_type": 2 00:04:22.881 } 00:04:22.881 ], 00:04:22.881 "driver_specific": {} 00:04:22.881 }, 00:04:22.881 { 00:04:22.881 "name": "Passthru0", 00:04:22.881 "aliases": [ 00:04:22.881 "d26c94a1-4847-5e39-8b15-4ae9b9d05b4f" 00:04:22.881 ], 00:04:22.881 "product_name": "passthru", 00:04:22.881 "block_size": 512, 00:04:22.881 "num_blocks": 16384, 00:04:22.881 "uuid": "d26c94a1-4847-5e39-8b15-4ae9b9d05b4f", 00:04:22.881 "assigned_rate_limits": { 00:04:22.881 "rw_ios_per_sec": 0, 00:04:22.881 "rw_mbytes_per_sec": 0, 00:04:22.881 "r_mbytes_per_sec": 0, 00:04:22.881 "w_mbytes_per_sec": 0 
00:04:22.881 }, 00:04:22.881 "claimed": false, 00:04:22.881 "zoned": false, 00:04:22.881 "supported_io_types": { 00:04:22.881 "read": true, 00:04:22.881 "write": true, 00:04:22.881 "unmap": true, 00:04:22.881 "write_zeroes": true, 00:04:22.881 "flush": true, 00:04:22.881 "reset": true, 00:04:22.881 "compare": false, 00:04:22.881 "compare_and_write": false, 00:04:22.881 "abort": true, 00:04:22.881 "nvme_admin": false, 00:04:22.881 "nvme_io": false 00:04:22.881 }, 00:04:22.881 "memory_domains": [ 00:04:22.881 { 00:04:22.881 "dma_device_id": "system", 00:04:22.881 "dma_device_type": 1 00:04:22.881 }, 00:04:22.881 { 00:04:22.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.881 "dma_device_type": 2 00:04:22.881 } 00:04:22.881 ], 00:04:22.881 "driver_specific": { 00:04:22.881 "passthru": { 00:04:22.881 "name": "Passthru0", 00:04:22.881 "base_bdev_name": "Malloc2" 00:04:22.881 } 00:04:22.881 } 00:04:22.881 } 00:04:22.881 ]' 00:04:22.881 13:33:25 -- rpc/rpc.sh@21 -- # jq length 00:04:22.881 13:33:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:22.881 13:33:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:22.881 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.881 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.881 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:22.881 13:33:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:22.881 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.881 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:23.139 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:23.139 13:33:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.139 13:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:23.139 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:23.139 13:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:23.139 13:33:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.139 13:33:25 -- rpc/rpc.sh@26 -- # jq length 00:04:23.139 13:33:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.139 00:04:23.139 real 0m0.238s 00:04:23.139 user 0m0.166s 00:04:23.139 sys 0m0.015s 00:04:23.139 13:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.139 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:04:23.139 ************************************ 00:04:23.139 END TEST rpc_daemon_integrity 00:04:23.139 ************************************ 00:04:23.139 13:33:25 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.139 13:33:25 -- rpc/rpc.sh@84 -- # killprocess 1014555 00:04:23.139 13:33:25 -- common/autotest_common.sh@936 -- # '[' -z 1014555 ']' 00:04:23.139 13:33:25 -- common/autotest_common.sh@940 -- # kill -0 1014555 00:04:23.139 13:33:25 -- common/autotest_common.sh@941 -- # uname 00:04:23.139 13:33:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:23.139 13:33:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1014555 00:04:23.139 13:33:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:23.139 13:33:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:23.139 13:33:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1014555' 00:04:23.139 killing process with pid 1014555 00:04:23.139 13:33:25 -- common/autotest_common.sh@955 -- # kill 1014555 00:04:23.139 13:33:25 -- common/autotest_common.sh@960 -- # wait 1014555 00:04:23.705 00:04:23.705 real 0m2.535s 00:04:23.705 user 0m3.215s 00:04:23.705 sys 0m0.848s 
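The shutdown above is the killprocess helper from autotest_common.sh: it first confirms the PID still belongs to an SPDK reactor, then kills it and waits so the exit status is actually collected. Stripped of the logging, the pattern is approximately:

    kill -0 "$tgt_pid"                      # PID still exists?
    ps --no-headers -o comm= "$tgt_pid"     # and has not been reused by another command
    kill "$tgt_pid"
    wait "$tgt_pid"                         # reap the process and propagate its exit status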
00:04:23.705 13:33:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.705 13:33:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.705 ************************************ 00:04:23.705 END TEST rpc 00:04:23.705 ************************************ 00:04:23.705 13:33:26 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:23.705 13:33:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.705 13:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.705 13:33:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.705 ************************************ 00:04:23.705 START TEST skip_rpc 00:04:23.705 ************************************ 00:04:23.705 13:33:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:23.705 * Looking for test storage... 00:04:23.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:23.705 13:33:26 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:23.705 13:33:26 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:23.705 13:33:26 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.705 13:33:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.705 13:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.705 13:33:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.963 ************************************ 00:04:23.963 START TEST skip_rpc 00:04:23.963 ************************************ 00:04:23.963 13:33:26 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:23.963 13:33:26 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1015104 00:04:23.963 13:33:26 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.963 13:33:26 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.963 13:33:26 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.963 [2024-04-18 13:33:26.674300] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:23.964 [2024-04-18 13:33:26.674457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015104 ] 00:04:23.964 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.222 [2024-04-18 13:33:26.780083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.222 [2024-04-18 13:33:26.902173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.486 13:33:31 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.486 13:33:31 -- common/autotest_common.sh@638 -- # local es=0 00:04:29.486 13:33:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.486 13:33:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:29.486 13:33:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:29.486 13:33:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:29.486 13:33:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:29.486 13:33:31 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:29.486 13:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.486 13:33:31 -- common/autotest_common.sh@10 -- # set +x 00:04:29.486 13:33:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:29.486 13:33:31 -- common/autotest_common.sh@641 -- # es=1 00:04:29.486 13:33:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:29.486 13:33:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:29.486 13:33:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:29.486 13:33:31 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.486 13:33:31 -- rpc/skip_rpc.sh@23 -- # killprocess 1015104 00:04:29.486 13:33:31 -- common/autotest_common.sh@936 -- # '[' -z 1015104 ']' 00:04:29.486 13:33:31 -- common/autotest_common.sh@940 -- # kill -0 1015104 00:04:29.486 13:33:31 -- common/autotest_common.sh@941 -- # uname 00:04:29.486 13:33:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.486 13:33:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1015104 00:04:29.486 13:33:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.486 13:33:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.486 13:33:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1015104' 00:04:29.486 killing process with pid 1015104 00:04:29.486 13:33:31 -- common/autotest_common.sh@955 -- # kill 1015104 00:04:29.486 13:33:31 -- common/autotest_common.sh@960 -- # wait 1015104 00:04:29.486 00:04:29.486 real 0m5.544s 00:04:29.486 user 0m5.167s 00:04:29.486 sys 0m0.401s 00:04:29.486 13:33:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.486 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:29.486 ************************************ 00:04:29.486 END TEST skip_rpc 00:04:29.486 ************************************ 00:04:29.486 13:33:32 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.486 13:33:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.486 13:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.486 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:29.486 ************************************ 00:04:29.486 START TEST skip_rpc_with_json 00:04:29.486 ************************************ 
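Before the JSON variant that follows, plain skip_rpc above verified the negative case: a target started with --no-rpc-server must not answer RPCs, so the wrapped spdk_get_version call is expected to fail. By hand that check is simply (error handling trimmed):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
    fi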
00:04:29.486 13:33:32 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:29.486 13:33:32 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.486 13:33:32 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1015803 00:04:29.486 13:33:32 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.486 13:33:32 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.486 13:33:32 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1015803 00:04:29.486 13:33:32 -- common/autotest_common.sh@817 -- # '[' -z 1015803 ']' 00:04:29.486 13:33:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.486 13:33:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:29.486 13:33:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.486 13:33:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:29.486 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:29.744 [2024-04-18 13:33:32.321400] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:29.744 [2024-04-18 13:33:32.321494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015803 ] 00:04:29.744 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.744 [2024-04-18 13:33:32.401564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.744 [2024-04-18 13:33:32.522989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.002 13:33:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:30.002 13:33:32 -- common/autotest_common.sh@850 -- # return 0 00:04:30.002 13:33:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.002 13:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.002 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:30.002 [2024-04-18 13:33:32.802177] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.260 request: 00:04:30.260 { 00:04:30.260 "trtype": "tcp", 00:04:30.260 "method": "nvmf_get_transports", 00:04:30.260 "req_id": 1 00:04:30.260 } 00:04:30.260 Got JSON-RPC error response 00:04:30.260 response: 00:04:30.260 { 00:04:30.260 "code": -19, 00:04:30.260 "message": "No such device" 00:04:30.260 } 00:04:30.260 13:33:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:30.260 13:33:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.260 13:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.260 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:30.260 [2024-04-18 13:33:32.810289] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.260 13:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.260 13:33:32 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.260 13:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.260 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:30.260 13:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.260 13:33:32 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:30.260 { 
00:04:30.260 "subsystems": [ 00:04:30.260 { 00:04:30.260 "subsystem": "keyring", 00:04:30.260 "config": [] 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "subsystem": "iobuf", 00:04:30.260 "config": [ 00:04:30.260 { 00:04:30.260 "method": "iobuf_set_options", 00:04:30.260 "params": { 00:04:30.260 "small_pool_count": 8192, 00:04:30.260 "large_pool_count": 1024, 00:04:30.260 "small_bufsize": 8192, 00:04:30.260 "large_bufsize": 135168 00:04:30.260 } 00:04:30.260 } 00:04:30.260 ] 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "subsystem": "sock", 00:04:30.260 "config": [ 00:04:30.260 { 00:04:30.260 "method": "sock_impl_set_options", 00:04:30.260 "params": { 00:04:30.260 "impl_name": "posix", 00:04:30.260 "recv_buf_size": 2097152, 00:04:30.260 "send_buf_size": 2097152, 00:04:30.260 "enable_recv_pipe": true, 00:04:30.260 "enable_quickack": false, 00:04:30.260 "enable_placement_id": 0, 00:04:30.260 "enable_zerocopy_send_server": true, 00:04:30.260 "enable_zerocopy_send_client": false, 00:04:30.260 "zerocopy_threshold": 0, 00:04:30.260 "tls_version": 0, 00:04:30.260 "enable_ktls": false 00:04:30.260 } 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "method": "sock_impl_set_options", 00:04:30.260 "params": { 00:04:30.260 "impl_name": "ssl", 00:04:30.260 "recv_buf_size": 4096, 00:04:30.260 "send_buf_size": 4096, 00:04:30.260 "enable_recv_pipe": true, 00:04:30.260 "enable_quickack": false, 00:04:30.260 "enable_placement_id": 0, 00:04:30.260 "enable_zerocopy_send_server": true, 00:04:30.260 "enable_zerocopy_send_client": false, 00:04:30.260 "zerocopy_threshold": 0, 00:04:30.260 "tls_version": 0, 00:04:30.260 "enable_ktls": false 00:04:30.260 } 00:04:30.260 } 00:04:30.260 ] 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "subsystem": "vmd", 00:04:30.260 "config": [] 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "subsystem": "accel", 00:04:30.260 "config": [ 00:04:30.260 { 00:04:30.260 "method": "accel_set_options", 00:04:30.260 "params": { 00:04:30.260 "small_cache_size": 128, 00:04:30.260 "large_cache_size": 16, 00:04:30.260 "task_count": 2048, 00:04:30.260 "sequence_count": 2048, 00:04:30.260 "buf_count": 2048 00:04:30.260 } 00:04:30.260 } 00:04:30.260 ] 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "subsystem": "bdev", 00:04:30.260 "config": [ 00:04:30.260 { 00:04:30.260 "method": "bdev_set_options", 00:04:30.260 "params": { 00:04:30.260 "bdev_io_pool_size": 65535, 00:04:30.260 "bdev_io_cache_size": 256, 00:04:30.260 "bdev_auto_examine": true, 00:04:30.260 "iobuf_small_cache_size": 128, 00:04:30.260 "iobuf_large_cache_size": 16 00:04:30.260 } 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "method": "bdev_raid_set_options", 00:04:30.260 "params": { 00:04:30.260 "process_window_size_kb": 1024 00:04:30.260 } 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "method": "bdev_iscsi_set_options", 00:04:30.260 "params": { 00:04:30.260 "timeout_sec": 30 00:04:30.260 } 00:04:30.260 }, 00:04:30.260 { 00:04:30.260 "method": "bdev_nvme_set_options", 00:04:30.260 "params": { 00:04:30.260 "action_on_timeout": "none", 00:04:30.260 "timeout_us": 0, 00:04:30.260 "timeout_admin_us": 0, 00:04:30.261 "keep_alive_timeout_ms": 10000, 00:04:30.261 "arbitration_burst": 0, 00:04:30.261 "low_priority_weight": 0, 00:04:30.261 "medium_priority_weight": 0, 00:04:30.261 "high_priority_weight": 0, 00:04:30.261 "nvme_adminq_poll_period_us": 10000, 00:04:30.261 "nvme_ioq_poll_period_us": 0, 00:04:30.261 "io_queue_requests": 0, 00:04:30.261 "delay_cmd_submit": true, 00:04:30.261 "transport_retry_count": 4, 00:04:30.261 "bdev_retry_count": 3, 00:04:30.261 
"transport_ack_timeout": 0, 00:04:30.261 "ctrlr_loss_timeout_sec": 0, 00:04:30.261 "reconnect_delay_sec": 0, 00:04:30.261 "fast_io_fail_timeout_sec": 0, 00:04:30.261 "disable_auto_failback": false, 00:04:30.261 "generate_uuids": false, 00:04:30.261 "transport_tos": 0, 00:04:30.261 "nvme_error_stat": false, 00:04:30.261 "rdma_srq_size": 0, 00:04:30.261 "io_path_stat": false, 00:04:30.261 "allow_accel_sequence": false, 00:04:30.261 "rdma_max_cq_size": 0, 00:04:30.261 "rdma_cm_event_timeout_ms": 0, 00:04:30.261 "dhchap_digests": [ 00:04:30.261 "sha256", 00:04:30.261 "sha384", 00:04:30.261 "sha512" 00:04:30.261 ], 00:04:30.261 "dhchap_dhgroups": [ 00:04:30.261 "null", 00:04:30.261 "ffdhe2048", 00:04:30.261 "ffdhe3072", 00:04:30.261 "ffdhe4096", 00:04:30.261 "ffdhe6144", 00:04:30.261 "ffdhe8192" 00:04:30.261 ] 00:04:30.261 } 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "method": "bdev_nvme_set_hotplug", 00:04:30.261 "params": { 00:04:30.261 "period_us": 100000, 00:04:30.261 "enable": false 00:04:30.261 } 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "method": "bdev_wait_for_examine" 00:04:30.261 } 00:04:30.261 ] 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "scsi", 00:04:30.261 "config": null 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "scheduler", 00:04:30.261 "config": [ 00:04:30.261 { 00:04:30.261 "method": "framework_set_scheduler", 00:04:30.261 "params": { 00:04:30.261 "name": "static" 00:04:30.261 } 00:04:30.261 } 00:04:30.261 ] 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "vhost_scsi", 00:04:30.261 "config": [] 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "vhost_blk", 00:04:30.261 "config": [] 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "ublk", 00:04:30.261 "config": [] 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "nbd", 00:04:30.261 "config": [] 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "nvmf", 00:04:30.261 "config": [ 00:04:30.261 { 00:04:30.261 "method": "nvmf_set_config", 00:04:30.261 "params": { 00:04:30.261 "discovery_filter": "match_any", 00:04:30.261 "admin_cmd_passthru": { 00:04:30.261 "identify_ctrlr": false 00:04:30.261 } 00:04:30.261 } 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "method": "nvmf_set_max_subsystems", 00:04:30.261 "params": { 00:04:30.261 "max_subsystems": 1024 00:04:30.261 } 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "method": "nvmf_set_crdt", 00:04:30.261 "params": { 00:04:30.261 "crdt1": 0, 00:04:30.261 "crdt2": 0, 00:04:30.261 "crdt3": 0 00:04:30.261 } 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "method": "nvmf_create_transport", 00:04:30.261 "params": { 00:04:30.261 "trtype": "TCP", 00:04:30.261 "max_queue_depth": 128, 00:04:30.261 "max_io_qpairs_per_ctrlr": 127, 00:04:30.261 "in_capsule_data_size": 4096, 00:04:30.261 "max_io_size": 131072, 00:04:30.261 "io_unit_size": 131072, 00:04:30.261 "max_aq_depth": 128, 00:04:30.261 "num_shared_buffers": 511, 00:04:30.261 "buf_cache_size": 4294967295, 00:04:30.261 "dif_insert_or_strip": false, 00:04:30.261 "zcopy": false, 00:04:30.261 "c2h_success": true, 00:04:30.261 "sock_priority": 0, 00:04:30.261 "abort_timeout_sec": 1, 00:04:30.261 "ack_timeout": 0 00:04:30.261 } 00:04:30.261 } 00:04:30.261 ] 00:04:30.261 }, 00:04:30.261 { 00:04:30.261 "subsystem": "iscsi", 00:04:30.261 "config": [ 00:04:30.261 { 00:04:30.261 "method": "iscsi_set_options", 00:04:30.261 "params": { 00:04:30.261 "node_base": "iqn.2016-06.io.spdk", 00:04:30.261 "max_sessions": 128, 00:04:30.261 "max_connections_per_session": 2, 00:04:30.261 "max_queue_depth": 64, 
00:04:30.261 "default_time2wait": 2, 00:04:30.261 "default_time2retain": 20, 00:04:30.261 "first_burst_length": 8192, 00:04:30.261 "immediate_data": true, 00:04:30.261 "allow_duplicated_isid": false, 00:04:30.261 "error_recovery_level": 0, 00:04:30.261 "nop_timeout": 60, 00:04:30.261 "nop_in_interval": 30, 00:04:30.261 "disable_chap": false, 00:04:30.261 "require_chap": false, 00:04:30.261 "mutual_chap": false, 00:04:30.261 "chap_group": 0, 00:04:30.261 "max_large_datain_per_connection": 64, 00:04:30.261 "max_r2t_per_connection": 4, 00:04:30.261 "pdu_pool_size": 36864, 00:04:30.261 "immediate_data_pool_size": 16384, 00:04:30.261 "data_out_pool_size": 2048 00:04:30.261 } 00:04:30.261 } 00:04:30.261 ] 00:04:30.261 } 00:04:30.261 ] 00:04:30.261 } 00:04:30.261 13:33:32 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.261 13:33:32 -- rpc/skip_rpc.sh@40 -- # killprocess 1015803 00:04:30.261 13:33:32 -- common/autotest_common.sh@936 -- # '[' -z 1015803 ']' 00:04:30.261 13:33:32 -- common/autotest_common.sh@940 -- # kill -0 1015803 00:04:30.261 13:33:32 -- common/autotest_common.sh@941 -- # uname 00:04:30.261 13:33:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:30.261 13:33:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1015803 00:04:30.261 13:33:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:30.261 13:33:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:30.261 13:33:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1015803' 00:04:30.261 killing process with pid 1015803 00:04:30.261 13:33:32 -- common/autotest_common.sh@955 -- # kill 1015803 00:04:30.261 13:33:32 -- common/autotest_common.sh@960 -- # wait 1015803 00:04:30.827 13:33:33 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1015943 00:04:30.827 13:33:33 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:30.827 13:33:33 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:36.131 13:33:38 -- rpc/skip_rpc.sh@50 -- # killprocess 1015943 00:04:36.131 13:33:38 -- common/autotest_common.sh@936 -- # '[' -z 1015943 ']' 00:04:36.131 13:33:38 -- common/autotest_common.sh@940 -- # kill -0 1015943 00:04:36.131 13:33:38 -- common/autotest_common.sh@941 -- # uname 00:04:36.131 13:33:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.131 13:33:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1015943 00:04:36.131 13:33:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.131 13:33:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.131 13:33:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1015943' 00:04:36.131 killing process with pid 1015943 00:04:36.131 13:33:38 -- common/autotest_common.sh@955 -- # kill 1015943 00:04:36.131 13:33:38 -- common/autotest_common.sh@960 -- # wait 1015943 00:04:36.389 13:33:38 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:36.389 13:33:38 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:36.389 00:04:36.389 real 0m6.727s 00:04:36.389 user 0m6.306s 00:04:36.389 sys 0m0.745s 00:04:36.389 13:33:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.389 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 
************************************ 00:04:36.389 END TEST skip_rpc_with_json 00:04:36.389 ************************************ 00:04:36.389 13:33:39 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:36.389 13:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.389 13:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.389 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 ************************************ 00:04:36.389 START TEST skip_rpc_with_delay 00:04:36.389 ************************************ 00:04:36.389 13:33:39 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:36.389 13:33:39 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.389 13:33:39 -- common/autotest_common.sh@638 -- # local es=0 00:04:36.389 13:33:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.389 13:33:39 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.389 13:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:36.389 13:33:39 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.389 13:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:36.389 13:33:39 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.389 13:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:36.389 13:33:39 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.389 13:33:39 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:36.389 13:33:39 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.648 [2024-04-18 13:33:39.194420] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
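The error just above is the point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until an RPC tells the target to proceed, so it is rejected when combined with --no-rpc-server. In normal use the flag is paired with a follow-up RPC once any pre-initialization options have been set; a minimal sketch (the framework_start_init call is standard SPDK usage, not taken from this run) is:

    ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    # ... issue pre-initialization RPCs here, e.g. sock or accel options ...
    ./scripts/rpc.py framework_start_init   # allow subsystem initialization to continue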
00:04:36.648 [2024-04-18 13:33:39.194562] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:36.648 13:33:39 -- common/autotest_common.sh@641 -- # es=1 00:04:36.648 13:33:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:36.648 13:33:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:36.648 13:33:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:36.648 00:04:36.648 real 0m0.079s 00:04:36.648 user 0m0.052s 00:04:36.648 sys 0m0.027s 00:04:36.648 13:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.648 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.648 ************************************ 00:04:36.648 END TEST skip_rpc_with_delay 00:04:36.648 ************************************ 00:04:36.648 13:33:39 -- rpc/skip_rpc.sh@77 -- # uname 00:04:36.648 13:33:39 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:36.648 13:33:39 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:36.648 13:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.648 13:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.648 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.648 ************************************ 00:04:36.648 START TEST exit_on_failed_rpc_init 00:04:36.648 ************************************ 00:04:36.648 13:33:39 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:36.648 13:33:39 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1016682 00:04:36.648 13:33:39 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.648 13:33:39 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1016682 00:04:36.648 13:33:39 -- common/autotest_common.sh@817 -- # '[' -z 1016682 ']' 00:04:36.648 13:33:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.648 13:33:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.648 13:33:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.648 13:33:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.648 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.648 [2024-04-18 13:33:39.410892] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:36.648 [2024-04-18 13:33:39.411027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016682 ] 00:04:36.907 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.907 [2024-04-18 13:33:39.494809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.907 [2024-04-18 13:33:39.619053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.165 13:33:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.165 13:33:39 -- common/autotest_common.sh@850 -- # return 0 00:04:37.165 13:33:39 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.165 13:33:39 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.165 13:33:39 -- common/autotest_common.sh@638 -- # local es=0 00:04:37.165 13:33:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.165 13:33:39 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.165 13:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.165 13:33:39 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.165 13:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.165 13:33:39 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.165 13:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.165 13:33:39 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.165 13:33:39 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:37.165 13:33:39 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.165 [2024-04-18 13:33:39.958317] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:37.165 [2024-04-18 13:33:39.958410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016753 ] 00:04:37.423 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.423 [2024-04-18 13:33:40.037810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.423 [2024-04-18 13:33:40.159465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.423 [2024-04-18 13:33:40.159590] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
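exit_on_failed_rpc_init provokes this failure deliberately: a second spdk_tgt on core mask 0x2 tries to listen on the same default /var/tmp/spdk.sock while the first instance still owns it, so the RPC listener refuses, initialization fails, and the app exits non-zero, which is what the test asserts. When two targets genuinely need to coexist, each gets its own RPC socket; for example (socket path chosen purely for illustration):

    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version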
00:04:37.423 [2024-04-18 13:33:40.159612] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:37.423 [2024-04-18 13:33:40.159625] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:37.682 13:33:40 -- common/autotest_common.sh@641 -- # es=234 00:04:37.682 13:33:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:37.682 13:33:40 -- common/autotest_common.sh@650 -- # es=106 00:04:37.682 13:33:40 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:37.682 13:33:40 -- common/autotest_common.sh@658 -- # es=1 00:04:37.682 13:33:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:37.682 13:33:40 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:37.682 13:33:40 -- rpc/skip_rpc.sh@70 -- # killprocess 1016682 00:04:37.682 13:33:40 -- common/autotest_common.sh@936 -- # '[' -z 1016682 ']' 00:04:37.682 13:33:40 -- common/autotest_common.sh@940 -- # kill -0 1016682 00:04:37.682 13:33:40 -- common/autotest_common.sh@941 -- # uname 00:04:37.682 13:33:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.682 13:33:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1016682 00:04:37.682 13:33:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:37.682 13:33:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:37.682 13:33:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1016682' 00:04:37.682 killing process with pid 1016682 00:04:37.682 13:33:40 -- common/autotest_common.sh@955 -- # kill 1016682 00:04:37.682 13:33:40 -- common/autotest_common.sh@960 -- # wait 1016682 00:04:38.255 00:04:38.255 real 0m1.453s 00:04:38.255 user 0m1.636s 00:04:38.255 sys 0m0.500s 00:04:38.255 13:33:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:38.255 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:04:38.255 ************************************ 00:04:38.255 END TEST exit_on_failed_rpc_init 00:04:38.255 ************************************ 00:04:38.255 13:33:40 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:38.255 00:04:38.255 real 0m14.412s 00:04:38.255 user 0m13.402s 00:04:38.255 sys 0m2.009s 00:04:38.255 13:33:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:38.255 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:04:38.255 ************************************ 00:04:38.255 END TEST skip_rpc 00:04:38.255 ************************************ 00:04:38.255 13:33:40 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:38.255 13:33:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.255 13:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.255 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:04:38.255 ************************************ 00:04:38.255 START TEST rpc_client 00:04:38.255 ************************************ 00:04:38.255 13:33:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:38.255 * Looking for test storage... 
00:04:38.255 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:38.255 13:33:41 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:38.255 OK 00:04:38.255 13:33:41 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:38.255 00:04:38.255 real 0m0.082s 00:04:38.255 user 0m0.036s 00:04:38.255 sys 0m0.052s 00:04:38.255 13:33:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:38.255 13:33:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.255 ************************************ 00:04:38.255 END TEST rpc_client 00:04:38.255 ************************************ 00:04:38.514 13:33:41 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:38.514 13:33:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.514 13:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.514 13:33:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.514 ************************************ 00:04:38.514 START TEST json_config 00:04:38.514 ************************************ 00:04:38.514 13:33:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:38.514 13:33:41 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.514 13:33:41 -- nvmf/common.sh@7 -- # uname -s 00:04:38.514 13:33:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.514 13:33:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.514 13:33:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.514 13:33:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.514 13:33:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.514 13:33:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.514 13:33:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.514 13:33:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.514 13:33:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.514 13:33:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.514 13:33:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:04:38.514 13:33:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:04:38.514 13:33:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.514 13:33:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.514 13:33:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.514 13:33:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.514 13:33:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:38.514 13:33:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.514 13:33:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.514 13:33:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.514 13:33:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.514 13:33:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.515 13:33:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.515 13:33:41 -- paths/export.sh@5 -- # export PATH 00:04:38.515 13:33:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.515 13:33:41 -- nvmf/common.sh@47 -- # : 0 00:04:38.515 13:33:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:38.515 13:33:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:38.515 13:33:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.515 13:33:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.515 13:33:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.515 13:33:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:38.515 13:33:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:38.515 13:33:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:38.515 13:33:41 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:38.515 13:33:41 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:38.515 13:33:41 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:38.515 13:33:41 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:38.515 13:33:41 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:38.515 13:33:41 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:38.515 13:33:41 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:38.515 13:33:41 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:38.515 13:33:41 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:38.515 13:33:41 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:38.515 13:33:41 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:38.515 13:33:41 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:38.515 13:33:41 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:38.515 13:33:41 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:38.515 13:33:41 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.515 13:33:41 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:38.515 INFO: JSON configuration test init 00:04:38.515 13:33:41 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:38.515 13:33:41 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:38.515 13:33:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:38.515 13:33:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.515 13:33:41 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:38.515 13:33:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:38.515 13:33:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.515 13:33:41 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:38.515 13:33:41 -- json_config/common.sh@9 -- # local app=target 00:04:38.515 13:33:41 -- json_config/common.sh@10 -- # shift 00:04:38.515 13:33:41 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.515 13:33:41 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.515 13:33:41 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.515 13:33:41 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.515 13:33:41 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.515 13:33:41 -- json_config/common.sh@22 -- # app_pid["$app"]=1017067 00:04:38.515 13:33:41 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:38.515 13:33:41 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.515 Waiting for target to run... 00:04:38.515 13:33:41 -- json_config/common.sh@25 -- # waitforlisten 1017067 /var/tmp/spdk_tgt.sock 00:04:38.515 13:33:41 -- common/autotest_common.sh@817 -- # '[' -z 1017067 ']' 00:04:38.515 13:33:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.515 13:33:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:38.515 13:33:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.515 13:33:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:38.515 13:33:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.775 [2024-04-18 13:33:41.321084] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:38.775 [2024-04-18 13:33:41.321193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017067 ] 00:04:38.775 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.342 [2024-04-18 13:33:41.923253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.342 [2024-04-18 13:33:42.030463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.601 13:33:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.601 13:33:42 -- common/autotest_common.sh@850 -- # return 0 00:04:39.601 13:33:42 -- json_config/common.sh@26 -- # echo '' 00:04:39.601 00:04:39.601 13:33:42 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:39.601 13:33:42 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:39.601 13:33:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.601 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.601 13:33:42 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:39.601 13:33:42 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:39.601 13:33:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:39.601 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.601 13:33:42 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:39.601 13:33:42 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:39.601 13:33:42 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:43.790 13:33:45 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:43.790 13:33:45 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:43.790 13:33:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:43.790 13:33:45 -- common/autotest_common.sh@10 -- # set +x 00:04:43.790 13:33:45 -- json_config/json_config.sh@45 -- # local ret=0 00:04:43.790 13:33:45 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:43.790 13:33:45 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:43.790 13:33:45 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:43.790 13:33:45 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:43.790 13:33:45 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:43.790 13:33:46 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:43.790 13:33:46 -- json_config/json_config.sh@48 -- # local get_types 00:04:43.790 13:33:46 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:43.790 13:33:46 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:43.790 13:33:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.790 13:33:46 -- common/autotest_common.sh@10 -- # set +x 00:04:43.790 13:33:46 -- json_config/json_config.sh@55 -- # return 0 00:04:43.790 13:33:46 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:43.790 13:33:46 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:43.791 13:33:46 -- json_config/json_config.sh@286 -- # 
[[ 0 -eq 1 ]] 00:04:43.791 13:33:46 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:43.791 13:33:46 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:43.791 13:33:46 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:43.791 13:33:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:43.791 13:33:46 -- common/autotest_common.sh@10 -- # set +x 00:04:43.791 13:33:46 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:43.791 13:33:46 -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:04:43.791 13:33:46 -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:04:43.791 13:33:46 -- json_config/json_config.sh@234 -- # nvmftestinit 00:04:43.791 13:33:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:04:43.791 13:33:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:43.791 13:33:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:04:43.791 13:33:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:04:43.791 13:33:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:04:43.791 13:33:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:43.791 13:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:43.791 13:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:43.791 13:33:46 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:04:43.791 13:33:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:04:43.791 13:33:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:04:43.791 13:33:46 -- common/autotest_common.sh@10 -- # set +x 00:04:46.321 13:33:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:04:46.321 13:33:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:04:46.321 13:33:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:04:46.321 13:33:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:04:46.321 13:33:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:04:46.321 13:33:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:04:46.321 13:33:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:04:46.321 13:33:48 -- nvmf/common.sh@295 -- # net_devs=() 00:04:46.321 13:33:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:04:46.321 13:33:48 -- nvmf/common.sh@296 -- # e810=() 00:04:46.321 13:33:48 -- nvmf/common.sh@296 -- # local -ga e810 00:04:46.321 13:33:48 -- nvmf/common.sh@297 -- # x722=() 00:04:46.321 13:33:48 -- nvmf/common.sh@297 -- # local -ga x722 00:04:46.321 13:33:48 -- nvmf/common.sh@298 -- # mlx=() 00:04:46.321 13:33:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:04:46.321 13:33:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:04:46.321 13:33:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:46.321 13:33:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:04:46.321 13:33:48 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:04:46.321 13:33:48 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:04:46.321 13:33:48 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:04:46.321 13:33:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:04:46.321 13:33:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:04:46.321 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:04:46.321 13:33:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:04:46.321 13:33:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:04:46.321 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:04:46.321 13:33:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:04:46.321 13:33:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:04:46.321 13:33:48 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.321 13:33:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:04:46.321 13:33:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.321 13:33:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:04:46.321 Found net devices under 0000:81:00.0: mlx_0_0 00:04:46.321 13:33:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.321 13:33:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.321 13:33:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:04:46.321 13:33:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.321 13:33:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:04:46.321 Found net devices under 0000:81:00.1: mlx_0_1 00:04:46.321 13:33:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.321 13:33:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:04:46.321 13:33:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:04:46.321 13:33:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:04:46.321 13:33:48 
-- nvmf/common.sh@409 -- # rdma_device_init 00:04:46.321 13:33:48 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:04:46.321 13:33:48 -- nvmf/common.sh@58 -- # uname 00:04:46.321 13:33:48 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:04:46.321 13:33:48 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:04:46.321 13:33:48 -- nvmf/common.sh@63 -- # modprobe ib_core 00:04:46.321 13:33:48 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:04:46.321 13:33:48 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:04:46.321 13:33:48 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:04:46.321 13:33:48 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:04:46.321 13:33:48 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:04:46.321 13:33:48 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:04:46.321 13:33:48 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:46.321 13:33:48 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:04:46.321 13:33:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:46.321 13:33:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:04:46.321 13:33:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:04:46.321 13:33:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:46.321 13:33:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:04:46.321 13:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:04:46.321 13:33:48 -- nvmf/common.sh@105 -- # continue 2 00:04:46.321 13:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.321 13:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:04:46.321 13:33:48 -- nvmf/common.sh@105 -- # continue 2 00:04:46.321 13:33:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:04:46.321 13:33:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:04:46.321 13:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:04:46.321 13:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:04:46.321 13:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:46.321 13:33:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.321 13:33:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:04:46.321 13:33:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:04:46.321 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:46.321 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:04:46.321 altname enp129s0f0np0 00:04:46.321 inet 192.168.100.8/24 scope global mlx_0_0 00:04:46.321 valid_lft forever preferred_lft forever 00:04:46.321 13:33:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:04:46.321 13:33:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:04:46.321 13:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:04:46.321 13:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:04:46.321 13:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:46.321 13:33:48 -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.321 13:33:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:04:46.321 13:33:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:04:46.321 13:33:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:04:46.321 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:46.321 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:04:46.321 altname enp129s0f1np1 00:04:46.322 inet 192.168.100.9/24 scope global mlx_0_1 00:04:46.322 valid_lft forever preferred_lft forever 00:04:46.322 13:33:48 -- nvmf/common.sh@411 -- # return 0 00:04:46.322 13:33:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:04:46.322 13:33:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:46.322 13:33:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:04:46.322 13:33:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:04:46.322 13:33:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:04:46.322 13:33:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:46.322 13:33:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:04:46.322 13:33:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:04:46.322 13:33:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:46.322 13:33:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:04:46.322 13:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.322 13:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.322 13:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:46.322 13:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:04:46.322 13:33:48 -- nvmf/common.sh@105 -- # continue 2 00:04:46.322 13:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.322 13:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.322 13:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:46.322 13:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.322 13:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:46.322 13:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:04:46.322 13:33:48 -- nvmf/common.sh@105 -- # continue 2 00:04:46.322 13:33:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:04:46.322 13:33:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:04:46.322 13:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:04:46.322 13:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:04:46.322 13:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:46.322 13:33:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.322 13:33:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:04:46.322 13:33:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:04:46.322 13:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:04:46.322 13:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:04:46.322 13:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:46.322 13:33:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.322 13:33:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:04:46.322 192.168.100.9' 00:04:46.322 13:33:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:04:46.322 192.168.100.9' 00:04:46.322 13:33:48 -- nvmf/common.sh@446 -- # head -n 1 00:04:46.322 13:33:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:46.322 13:33:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:04:46.322 
192.168.100.9' 00:04:46.322 13:33:48 -- nvmf/common.sh@447 -- # tail -n +2 00:04:46.322 13:33:48 -- nvmf/common.sh@447 -- # head -n 1 00:04:46.322 13:33:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:46.322 13:33:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:04:46.322 13:33:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:46.322 13:33:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:04:46.322 13:33:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:04:46.322 13:33:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:04:46.322 13:33:48 -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:04:46.322 13:33:48 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.322 13:33:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.322 MallocForNvmf0 00:04:46.322 13:33:49 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.322 13:33:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.579 MallocForNvmf1 00:04:46.579 13:33:49 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:46.579 13:33:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:46.836 [2024-04-18 13:33:49.568657] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:46.836 [2024-04-18 13:33:49.600383] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f60170/0x208d140) succeed. 00:04:46.836 [2024-04-18 13:33:49.615132] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f62360/0x200d0c0) succeed. 
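For readers tracing the nvmf bring-up in the trace above: the test discovers the mlx5 netdevs, reads their IPv4 addresses, loads nvme-rdma, and creates an RDMA transport on the running target. A minimal sketch of the equivalent commands, assuming the mlx_0_0/mlx_0_1 interface names and the /var/tmp/spdk_tgt.sock RPC socket seen in this run:

# sketch only -- interface names, addresses and socket path taken from the trace above
NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)    # 192.168.100.8 here
NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.9 here
modprobe nvme-rdma
# create the RDMA transport with the same flags the test uses; -c 0 asks for zero
# in-capsule data, which the target raises to the 256-byte minimum (see the warning logged above)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
    nvmf_create_transport -t rdma -u 8192 -c 0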
00:04:47.094 13:33:49 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.094 13:33:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.351 13:33:49 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.351 13:33:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.609 13:33:50 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.609 13:33:50 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.868 13:33:50 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:47.868 13:33:50 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:48.126 [2024-04-18 13:33:50.786788] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:48.126 13:33:50 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:48.126 13:33:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:48.126 13:33:50 -- common/autotest_common.sh@10 -- # set +x 00:04:48.126 13:33:50 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:48.126 13:33:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:48.126 13:33:50 -- common/autotest_common.sh@10 -- # set +x 00:04:48.126 13:33:50 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:48.126 13:33:50 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.126 13:33:50 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.384 MallocBdevForConfigChangeCheck 00:04:48.384 13:33:51 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:48.384 13:33:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:48.384 13:33:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.384 13:33:51 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:48.384 13:33:51 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.951 13:33:51 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:48.951 INFO: shutting down applications... 
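To make the configuration that save_config just captured easier to follow, here is a compact sketch of the RPC sequence the test issued to build it (continuing from the transport creation above); every command, name and address is taken from the trace, only the $RPC shorthand is introduced for brevity:

RPC='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB malloc bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB malloc bdev, 1024-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC save_config    # dumps the running configuration as JSON; the test stores it as spdk_tgt_config.json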
00:04:48.951 13:33:51 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:48.951 13:33:51 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:48.951 13:33:51 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:48.951 13:33:51 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:50.853 Calling clear_iscsi_subsystem 00:04:50.853 Calling clear_nvmf_subsystem 00:04:50.853 Calling clear_nbd_subsystem 00:04:50.853 Calling clear_ublk_subsystem 00:04:50.853 Calling clear_vhost_blk_subsystem 00:04:50.853 Calling clear_vhost_scsi_subsystem 00:04:50.853 Calling clear_bdev_subsystem 00:04:50.853 13:33:53 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:50.853 13:33:53 -- json_config/json_config.sh@343 -- # count=100 00:04:50.853 13:33:53 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:50.853 13:33:53 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.853 13:33:53 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:50.853 13:33:53 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:51.474 13:33:53 -- json_config/json_config.sh@345 -- # break 00:04:51.474 13:33:53 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:51.474 13:33:53 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:51.474 13:33:53 -- json_config/common.sh@31 -- # local app=target 00:04:51.474 13:33:53 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.474 13:33:53 -- json_config/common.sh@35 -- # [[ -n 1017067 ]] 00:04:51.474 13:33:53 -- json_config/common.sh@38 -- # kill -SIGINT 1017067 00:04:51.474 13:33:53 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.474 13:33:53 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.474 13:33:53 -- json_config/common.sh@41 -- # kill -0 1017067 00:04:51.474 13:33:53 -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.732 13:33:54 -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.732 13:33:54 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.732 13:33:54 -- json_config/common.sh@41 -- # kill -0 1017067 00:04:51.732 13:33:54 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.732 13:33:54 -- json_config/common.sh@43 -- # break 00:04:51.732 13:33:54 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.732 13:33:54 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.732 SPDK target shutdown done 00:04:51.732 13:33:54 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:51.732 INFO: relaunching applications... 
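The 'SPDK target shutdown done' line above is produced by json_config_test_shutdown_app; stripped of the xtrace noise, the loop in the trace amounts to roughly the following (pid 1017067 in this run):

kill -SIGINT "$pid"                        # ask the target to shut down cleanly
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break    # signal 0 only checks whether the process still exists
    sleep 0.5
done
echo 'SPDK target shutdown done'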
00:04:51.732 13:33:54 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.732 13:33:54 -- json_config/common.sh@9 -- # local app=target 00:04:51.732 13:33:54 -- json_config/common.sh@10 -- # shift 00:04:51.732 13:33:54 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.732 13:33:54 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.732 13:33:54 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.732 13:33:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.732 13:33:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.732 13:33:54 -- json_config/common.sh@22 -- # app_pid["$app"]=1020281 00:04:51.732 13:33:54 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.732 13:33:54 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.732 Waiting for target to run... 00:04:51.732 13:33:54 -- json_config/common.sh@25 -- # waitforlisten 1020281 /var/tmp/spdk_tgt.sock 00:04:51.732 13:33:54 -- common/autotest_common.sh@817 -- # '[' -z 1020281 ']' 00:04:51.732 13:33:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.732 13:33:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.732 13:33:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.732 13:33:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.732 13:33:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.732 [2024-04-18 13:33:54.529605] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:51.732 [2024-04-18 13:33:54.529731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020281 ] 00:04:51.991 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.557 [2024-04-18 13:33:55.202126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.557 [2024-04-18 13:33:55.309109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.840 [2024-04-18 13:33:58.379204] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b663e0/0x1aeb5c0) succeed. 00:04:55.840 [2024-04-18 13:33:58.393435] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b685d0/0x1b6b600) succeed. 00:04:55.840 [2024-04-18 13:33:58.452521] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:56.771 13:33:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:56.771 13:33:59 -- common/autotest_common.sh@850 -- # return 0 00:04:56.771 13:33:59 -- json_config/common.sh@26 -- # echo '' 00:04:56.771 00:04:56.771 13:33:59 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:56.771 13:33:59 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:56.771 INFO: Checking if target configuration is the same... 
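The check announced here, and shown in full in the trace that follows, normalizes both JSON dumps and diffs them. Roughly (the file names below are placeholders; the test feeds the live config through /dev/fd/62 and compares it against the saved spdk_tgt_config.json):

# sort both configs into a canonical order, then compare
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort < live_config.json  > /tmp/sorted_live.json
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort < saved_config.json > /tmp/sorted_saved.json
diff -u /tmp/sorted_live.json /tmp/sorted_saved.json \
    && echo 'INFO: JSON config files are the same'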
00:04:56.772 13:33:59 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.772 13:33:59 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:56.772 13:33:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.772 + '[' 2 -ne 2 ']' 00:04:56.772 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:56.772 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:56.772 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:56.772 +++ basename /dev/fd/62 00:04:56.772 ++ mktemp /tmp/62.XXX 00:04:56.772 + tmp_file_1=/tmp/62.KAT 00:04:56.772 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.772 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.772 + tmp_file_2=/tmp/spdk_tgt_config.json.DrM 00:04:56.772 + ret=0 00:04:56.772 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.029 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.029 + diff -u /tmp/62.KAT /tmp/spdk_tgt_config.json.DrM 00:04:57.029 + echo 'INFO: JSON config files are the same' 00:04:57.029 INFO: JSON config files are the same 00:04:57.029 + rm /tmp/62.KAT /tmp/spdk_tgt_config.json.DrM 00:04:57.029 + exit 0 00:04:57.029 13:33:59 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:57.029 13:33:59 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:57.029 INFO: changing configuration and checking if this can be detected... 00:04:57.029 13:33:59 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.029 13:33:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.594 13:34:00 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.594 13:34:00 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:57.594 13:34:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.594 + '[' 2 -ne 2 ']' 00:04:57.594 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:57.594 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:04:57.594 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:57.594 +++ basename /dev/fd/62 00:04:57.594 ++ mktemp /tmp/62.XXX 00:04:57.594 + tmp_file_1=/tmp/62.VFr 00:04:57.594 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.594 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.594 + tmp_file_2=/tmp/spdk_tgt_config.json.S1L 00:04:57.594 + ret=0 00:04:57.594 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.158 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.158 + diff -u /tmp/62.VFr /tmp/spdk_tgt_config.json.S1L 00:04:58.416 + ret=1 00:04:58.416 + echo '=== Start of file: /tmp/62.VFr ===' 00:04:58.416 + cat /tmp/62.VFr 00:04:58.416 + echo '=== End of file: /tmp/62.VFr ===' 00:04:58.416 + echo '' 00:04:58.416 + echo '=== Start of file: /tmp/spdk_tgt_config.json.S1L ===' 00:04:58.416 + cat /tmp/spdk_tgt_config.json.S1L 00:04:58.416 + echo '=== End of file: /tmp/spdk_tgt_config.json.S1L ===' 00:04:58.416 + echo '' 00:04:58.416 + rm /tmp/62.VFr /tmp/spdk_tgt_config.json.S1L 00:04:58.416 + exit 1 00:04:58.416 13:34:00 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:58.416 INFO: configuration change detected. 00:04:58.416 13:34:00 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:58.416 13:34:00 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:58.416 13:34:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:58.416 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.416 13:34:00 -- json_config/json_config.sh@307 -- # local ret=0 00:04:58.416 13:34:00 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:58.416 13:34:00 -- json_config/json_config.sh@317 -- # [[ -n 1020281 ]] 00:04:58.416 13:34:00 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:58.416 13:34:00 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:58.416 13:34:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:58.416 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.416 13:34:00 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:58.416 13:34:00 -- json_config/json_config.sh@193 -- # uname -s 00:04:58.416 13:34:00 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:58.416 13:34:00 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:58.416 13:34:00 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:58.416 13:34:00 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:58.416 13:34:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:58.416 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.416 13:34:01 -- json_config/json_config.sh@323 -- # killprocess 1020281 00:04:58.416 13:34:01 -- common/autotest_common.sh@936 -- # '[' -z 1020281 ']' 00:04:58.416 13:34:01 -- common/autotest_common.sh@940 -- # kill -0 1020281 00:04:58.416 13:34:01 -- common/autotest_common.sh@941 -- # uname 00:04:58.416 13:34:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.416 13:34:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1020281 00:04:58.416 13:34:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.416 13:34:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.416 13:34:01 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 1020281' 00:04:58.416 killing process with pid 1020281 00:04:58.416 13:34:01 -- common/autotest_common.sh@955 -- # kill 1020281 00:04:58.416 13:34:01 -- common/autotest_common.sh@960 -- # wait 1020281 00:05:00.314 13:34:02 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.314 13:34:02 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:00.314 13:34:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:00.314 13:34:02 -- common/autotest_common.sh@10 -- # set +x 00:05:00.314 13:34:02 -- json_config/json_config.sh@328 -- # return 0 00:05:00.314 13:34:02 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:00.314 INFO: Success 00:05:00.314 13:34:02 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:00.314 13:34:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:00.314 13:34:02 -- nvmf/common.sh@117 -- # sync 00:05:00.314 13:34:02 -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:00.314 13:34:02 -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:00.314 13:34:02 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:05:00.314 13:34:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:00.314 13:34:02 -- nvmf/common.sh@484 -- # [[ '' == \t\c\p ]] 00:05:00.314 00:05:00.314 real 0m21.590s 00:05:00.314 user 0m25.824s 00:05:00.314 sys 0m4.645s 00:05:00.314 13:34:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.314 13:34:02 -- common/autotest_common.sh@10 -- # set +x 00:05:00.314 ************************************ 00:05:00.314 END TEST json_config 00:05:00.314 ************************************ 00:05:00.314 13:34:02 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:00.314 13:34:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.314 13:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.314 13:34:02 -- common/autotest_common.sh@10 -- # set +x 00:05:00.314 ************************************ 00:05:00.314 START TEST json_config_extra_key 00:05:00.314 ************************************ 00:05:00.314 13:34:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:00.314 13:34:02 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.314 13:34:02 -- nvmf/common.sh@7 -- # uname -s 00:05:00.314 13:34:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.314 13:34:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.314 13:34:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.314 13:34:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.314 13:34:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.314 13:34:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.314 13:34:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.314 13:34:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.314 13:34:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.314 13:34:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.314 13:34:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:05:00.314 13:34:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 
00:05:00.314 13:34:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.314 13:34:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.315 13:34:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.315 13:34:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.315 13:34:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:00.315 13:34:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.315 13:34:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.315 13:34:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.315 13:34:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.315 13:34:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.315 13:34:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.315 13:34:02 -- paths/export.sh@5 -- # export PATH 00:05:00.315 13:34:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.315 13:34:02 -- nvmf/common.sh@47 -- # : 0 00:05:00.315 13:34:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:00.315 13:34:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:00.315 13:34:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.315 13:34:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.315 13:34:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.315 13:34:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:00.315 13:34:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:00.315 13:34:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:00.315 13:34:02 -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:00.315 INFO: launching applications... 00:05:00.315 13:34:02 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:00.315 13:34:02 -- json_config/common.sh@9 -- # local app=target 00:05:00.315 13:34:02 -- json_config/common.sh@10 -- # shift 00:05:00.315 13:34:02 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.315 13:34:02 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.315 13:34:02 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.315 13:34:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.315 13:34:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.315 13:34:03 -- json_config/common.sh@22 -- # app_pid["$app"]=1021453 00:05:00.315 13:34:03 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:00.315 13:34:03 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.315 Waiting for target to run... 00:05:00.315 13:34:03 -- json_config/common.sh@25 -- # waitforlisten 1021453 /var/tmp/spdk_tgt.sock 00:05:00.315 13:34:03 -- common/autotest_common.sh@817 -- # '[' -z 1021453 ']' 00:05:00.315 13:34:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.315 13:34:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:00.315 13:34:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.315 13:34:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:00.315 13:34:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.315 [2024-04-18 13:34:03.061807] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:00.315 [2024-04-18 13:34:03.061914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021453 ] 00:05:00.315 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.880 [2024-04-18 13:34:03.460813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.880 [2024-04-18 13:34:03.552990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.445 13:34:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.445 13:34:04 -- common/autotest_common.sh@850 -- # return 0 00:05:01.445 13:34:04 -- json_config/common.sh@26 -- # echo '' 00:05:01.445 00:05:01.445 13:34:04 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:01.445 INFO: shutting down applications... 00:05:01.445 13:34:04 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:01.445 13:34:04 -- json_config/common.sh@31 -- # local app=target 00:05:01.445 13:34:04 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:01.445 13:34:04 -- json_config/common.sh@35 -- # [[ -n 1021453 ]] 00:05:01.445 13:34:04 -- json_config/common.sh@38 -- # kill -SIGINT 1021453 00:05:01.445 13:34:04 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:01.445 13:34:04 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.445 13:34:04 -- json_config/common.sh@41 -- # kill -0 1021453 00:05:01.445 13:34:04 -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.011 13:34:04 -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.011 13:34:04 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.011 13:34:04 -- json_config/common.sh@41 -- # kill -0 1021453 00:05:02.011 13:34:04 -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.577 13:34:05 -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.577 13:34:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.577 13:34:05 -- json_config/common.sh@41 -- # kill -0 1021453 00:05:02.577 13:34:05 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.577 13:34:05 -- json_config/common.sh@43 -- # break 00:05:02.577 13:34:05 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.577 13:34:05 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:02.577 SPDK target shutdown done 00:05:02.577 13:34:05 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:02.577 Success 00:05:02.577 00:05:02.577 real 0m2.178s 00:05:02.577 user 0m1.768s 00:05:02.577 sys 0m0.501s 00:05:02.577 13:34:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.577 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.577 ************************************ 00:05:02.577 END TEST json_config_extra_key 00:05:02.577 ************************************ 00:05:02.577 13:34:05 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.577 13:34:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.577 13:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.577 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.577 ************************************ 00:05:02.577 START TEST alias_rpc 00:05:02.577 ************************************ 00:05:02.577 13:34:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.577 * 
Looking for test storage... 00:05:02.577 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:02.577 13:34:05 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:02.577 13:34:05 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1021782 00:05:02.577 13:34:05 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.577 13:34:05 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1021782 00:05:02.577 13:34:05 -- common/autotest_common.sh@817 -- # '[' -z 1021782 ']' 00:05:02.577 13:34:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.577 13:34:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.577 13:34:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.577 13:34:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.577 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.835 [2024-04-18 13:34:05.382721] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:02.835 [2024-04-18 13:34:05.382841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021782 ] 00:05:02.835 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.835 [2024-04-18 13:34:05.467986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.835 [2024-04-18 13:34:05.592488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.093 13:34:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.093 13:34:05 -- common/autotest_common.sh@850 -- # return 0 00:05:03.093 13:34:05 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:03.658 13:34:06 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1021782 00:05:03.658 13:34:06 -- common/autotest_common.sh@936 -- # '[' -z 1021782 ']' 00:05:03.658 13:34:06 -- common/autotest_common.sh@940 -- # kill -0 1021782 00:05:03.658 13:34:06 -- common/autotest_common.sh@941 -- # uname 00:05:03.658 13:34:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.658 13:34:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1021782 00:05:03.658 13:34:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.658 13:34:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.658 13:34:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1021782' 00:05:03.658 killing process with pid 1021782 00:05:03.658 13:34:06 -- common/autotest_common.sh@955 -- # kill 1021782 00:05:03.658 13:34:06 -- common/autotest_common.sh@960 -- # wait 1021782 00:05:03.916 00:05:03.916 real 0m1.441s 00:05:03.916 user 0m1.751s 00:05:03.916 sys 0m0.483s 00:05:03.916 13:34:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.916 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:03.916 ************************************ 00:05:03.916 END TEST alias_rpc 00:05:03.916 ************************************ 00:05:04.175 13:34:06 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:04.175 13:34:06 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:04.175 13:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.175 13:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.175 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.175 ************************************ 00:05:04.175 START TEST spdkcli_tcp 00:05:04.175 ************************************ 00:05:04.175 13:34:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:04.175 * Looking for test storage... 00:05:04.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:04.175 13:34:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:04.175 13:34:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:04.175 13:34:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:04.175 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1021974 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:04.175 13:34:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 1021974 00:05:04.175 13:34:06 -- common/autotest_common.sh@817 -- # '[' -z 1021974 ']' 00:05:04.175 13:34:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.175 13:34:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.175 13:34:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.175 13:34:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.175 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.175 [2024-04-18 13:34:06.964233] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
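The spdkcli_tcp test starting here checks that the RPC server can be reached over TCP rather than the default Unix-domain socket. Further down, the trace does this with socat and rpc.py; a minimal sketch of that bridge, reusing the address, port, and flags shown in the log (the retry and timeout values are simply the ones this test passes):

  # Bridge TCP port 9998 to the target's Unix-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Query the target's RPC method list through the TCP endpoint;
  # the long method listing below is the output of this call.
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  # Tear the bridge down once the check is done.
  kill "$socat_pid"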
00:05:04.175 [2024-04-18 13:34:06.964327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021974 ] 00:05:04.433 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.433 [2024-04-18 13:34:07.042852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.433 [2024-04-18 13:34:07.165370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.433 [2024-04-18 13:34:07.165376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.691 13:34:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.691 13:34:07 -- common/autotest_common.sh@850 -- # return 0 00:05:04.691 13:34:07 -- spdkcli/tcp.sh@31 -- # socat_pid=1022109 00:05:04.691 13:34:07 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:04.691 13:34:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:05.257 [ 00:05:05.257 "bdev_malloc_delete", 00:05:05.257 "bdev_malloc_create", 00:05:05.257 "bdev_null_resize", 00:05:05.257 "bdev_null_delete", 00:05:05.257 "bdev_null_create", 00:05:05.257 "bdev_nvme_cuse_unregister", 00:05:05.257 "bdev_nvme_cuse_register", 00:05:05.257 "bdev_opal_new_user", 00:05:05.257 "bdev_opal_set_lock_state", 00:05:05.257 "bdev_opal_delete", 00:05:05.257 "bdev_opal_get_info", 00:05:05.257 "bdev_opal_create", 00:05:05.257 "bdev_nvme_opal_revert", 00:05:05.257 "bdev_nvme_opal_init", 00:05:05.257 "bdev_nvme_send_cmd", 00:05:05.257 "bdev_nvme_get_path_iostat", 00:05:05.257 "bdev_nvme_get_mdns_discovery_info", 00:05:05.257 "bdev_nvme_stop_mdns_discovery", 00:05:05.257 "bdev_nvme_start_mdns_discovery", 00:05:05.257 "bdev_nvme_set_multipath_policy", 00:05:05.257 "bdev_nvme_set_preferred_path", 00:05:05.257 "bdev_nvme_get_io_paths", 00:05:05.257 "bdev_nvme_remove_error_injection", 00:05:05.257 "bdev_nvme_add_error_injection", 00:05:05.257 "bdev_nvme_get_discovery_info", 00:05:05.257 "bdev_nvme_stop_discovery", 00:05:05.257 "bdev_nvme_start_discovery", 00:05:05.257 "bdev_nvme_get_controller_health_info", 00:05:05.257 "bdev_nvme_disable_controller", 00:05:05.257 "bdev_nvme_enable_controller", 00:05:05.257 "bdev_nvme_reset_controller", 00:05:05.257 "bdev_nvme_get_transport_statistics", 00:05:05.257 "bdev_nvme_apply_firmware", 00:05:05.257 "bdev_nvme_detach_controller", 00:05:05.257 "bdev_nvme_get_controllers", 00:05:05.257 "bdev_nvme_attach_controller", 00:05:05.257 "bdev_nvme_set_hotplug", 00:05:05.257 "bdev_nvme_set_options", 00:05:05.257 "bdev_passthru_delete", 00:05:05.257 "bdev_passthru_create", 00:05:05.257 "bdev_lvol_grow_lvstore", 00:05:05.257 "bdev_lvol_get_lvols", 00:05:05.257 "bdev_lvol_get_lvstores", 00:05:05.257 "bdev_lvol_delete", 00:05:05.257 "bdev_lvol_set_read_only", 00:05:05.257 "bdev_lvol_resize", 00:05:05.257 "bdev_lvol_decouple_parent", 00:05:05.257 "bdev_lvol_inflate", 00:05:05.257 "bdev_lvol_rename", 00:05:05.257 "bdev_lvol_clone_bdev", 00:05:05.257 "bdev_lvol_clone", 00:05:05.257 "bdev_lvol_snapshot", 00:05:05.257 "bdev_lvol_create", 00:05:05.257 "bdev_lvol_delete_lvstore", 00:05:05.257 "bdev_lvol_rename_lvstore", 00:05:05.257 "bdev_lvol_create_lvstore", 00:05:05.257 "bdev_raid_set_options", 00:05:05.257 "bdev_raid_remove_base_bdev", 00:05:05.257 "bdev_raid_add_base_bdev", 00:05:05.257 "bdev_raid_delete", 00:05:05.257 "bdev_raid_create", 
00:05:05.257 "bdev_raid_get_bdevs", 00:05:05.257 "bdev_error_inject_error", 00:05:05.257 "bdev_error_delete", 00:05:05.257 "bdev_error_create", 00:05:05.257 "bdev_split_delete", 00:05:05.257 "bdev_split_create", 00:05:05.257 "bdev_delay_delete", 00:05:05.257 "bdev_delay_create", 00:05:05.257 "bdev_delay_update_latency", 00:05:05.257 "bdev_zone_block_delete", 00:05:05.257 "bdev_zone_block_create", 00:05:05.257 "blobfs_create", 00:05:05.257 "blobfs_detect", 00:05:05.257 "blobfs_set_cache_size", 00:05:05.257 "bdev_aio_delete", 00:05:05.257 "bdev_aio_rescan", 00:05:05.257 "bdev_aio_create", 00:05:05.257 "bdev_ftl_set_property", 00:05:05.257 "bdev_ftl_get_properties", 00:05:05.257 "bdev_ftl_get_stats", 00:05:05.257 "bdev_ftl_unmap", 00:05:05.257 "bdev_ftl_unload", 00:05:05.257 "bdev_ftl_delete", 00:05:05.257 "bdev_ftl_load", 00:05:05.257 "bdev_ftl_create", 00:05:05.257 "bdev_virtio_attach_controller", 00:05:05.257 "bdev_virtio_scsi_get_devices", 00:05:05.257 "bdev_virtio_detach_controller", 00:05:05.257 "bdev_virtio_blk_set_hotplug", 00:05:05.257 "bdev_iscsi_delete", 00:05:05.257 "bdev_iscsi_create", 00:05:05.257 "bdev_iscsi_set_options", 00:05:05.257 "accel_error_inject_error", 00:05:05.257 "ioat_scan_accel_module", 00:05:05.257 "dsa_scan_accel_module", 00:05:05.257 "iaa_scan_accel_module", 00:05:05.257 "keyring_file_remove_key", 00:05:05.257 "keyring_file_add_key", 00:05:05.257 "iscsi_set_options", 00:05:05.257 "iscsi_get_auth_groups", 00:05:05.257 "iscsi_auth_group_remove_secret", 00:05:05.257 "iscsi_auth_group_add_secret", 00:05:05.257 "iscsi_delete_auth_group", 00:05:05.257 "iscsi_create_auth_group", 00:05:05.257 "iscsi_set_discovery_auth", 00:05:05.257 "iscsi_get_options", 00:05:05.257 "iscsi_target_node_request_logout", 00:05:05.257 "iscsi_target_node_set_redirect", 00:05:05.257 "iscsi_target_node_set_auth", 00:05:05.257 "iscsi_target_node_add_lun", 00:05:05.257 "iscsi_get_stats", 00:05:05.258 "iscsi_get_connections", 00:05:05.258 "iscsi_portal_group_set_auth", 00:05:05.258 "iscsi_start_portal_group", 00:05:05.258 "iscsi_delete_portal_group", 00:05:05.258 "iscsi_create_portal_group", 00:05:05.258 "iscsi_get_portal_groups", 00:05:05.258 "iscsi_delete_target_node", 00:05:05.258 "iscsi_target_node_remove_pg_ig_maps", 00:05:05.258 "iscsi_target_node_add_pg_ig_maps", 00:05:05.258 "iscsi_create_target_node", 00:05:05.258 "iscsi_get_target_nodes", 00:05:05.258 "iscsi_delete_initiator_group", 00:05:05.258 "iscsi_initiator_group_remove_initiators", 00:05:05.258 "iscsi_initiator_group_add_initiators", 00:05:05.258 "iscsi_create_initiator_group", 00:05:05.258 "iscsi_get_initiator_groups", 00:05:05.258 "nvmf_set_crdt", 00:05:05.258 "nvmf_set_config", 00:05:05.258 "nvmf_set_max_subsystems", 00:05:05.258 "nvmf_subsystem_get_listeners", 00:05:05.258 "nvmf_subsystem_get_qpairs", 00:05:05.258 "nvmf_subsystem_get_controllers", 00:05:05.258 "nvmf_get_stats", 00:05:05.258 "nvmf_get_transports", 00:05:05.258 "nvmf_create_transport", 00:05:05.258 "nvmf_get_targets", 00:05:05.258 "nvmf_delete_target", 00:05:05.258 "nvmf_create_target", 00:05:05.258 "nvmf_subsystem_allow_any_host", 00:05:05.258 "nvmf_subsystem_remove_host", 00:05:05.258 "nvmf_subsystem_add_host", 00:05:05.258 "nvmf_ns_remove_host", 00:05:05.258 "nvmf_ns_add_host", 00:05:05.258 "nvmf_subsystem_remove_ns", 00:05:05.258 "nvmf_subsystem_add_ns", 00:05:05.258 "nvmf_subsystem_listener_set_ana_state", 00:05:05.258 "nvmf_discovery_get_referrals", 00:05:05.258 "nvmf_discovery_remove_referral", 00:05:05.258 "nvmf_discovery_add_referral", 00:05:05.258 
"nvmf_subsystem_remove_listener", 00:05:05.258 "nvmf_subsystem_add_listener", 00:05:05.258 "nvmf_delete_subsystem", 00:05:05.258 "nvmf_create_subsystem", 00:05:05.258 "nvmf_get_subsystems", 00:05:05.258 "env_dpdk_get_mem_stats", 00:05:05.258 "nbd_get_disks", 00:05:05.258 "nbd_stop_disk", 00:05:05.258 "nbd_start_disk", 00:05:05.258 "ublk_recover_disk", 00:05:05.258 "ublk_get_disks", 00:05:05.258 "ublk_stop_disk", 00:05:05.258 "ublk_start_disk", 00:05:05.258 "ublk_destroy_target", 00:05:05.258 "ublk_create_target", 00:05:05.258 "virtio_blk_create_transport", 00:05:05.258 "virtio_blk_get_transports", 00:05:05.258 "vhost_controller_set_coalescing", 00:05:05.258 "vhost_get_controllers", 00:05:05.258 "vhost_delete_controller", 00:05:05.258 "vhost_create_blk_controller", 00:05:05.258 "vhost_scsi_controller_remove_target", 00:05:05.258 "vhost_scsi_controller_add_target", 00:05:05.258 "vhost_start_scsi_controller", 00:05:05.258 "vhost_create_scsi_controller", 00:05:05.258 "thread_set_cpumask", 00:05:05.258 "framework_get_scheduler", 00:05:05.258 "framework_set_scheduler", 00:05:05.258 "framework_get_reactors", 00:05:05.258 "thread_get_io_channels", 00:05:05.258 "thread_get_pollers", 00:05:05.258 "thread_get_stats", 00:05:05.258 "framework_monitor_context_switch", 00:05:05.258 "spdk_kill_instance", 00:05:05.258 "log_enable_timestamps", 00:05:05.258 "log_get_flags", 00:05:05.258 "log_clear_flag", 00:05:05.258 "log_set_flag", 00:05:05.258 "log_get_level", 00:05:05.258 "log_set_level", 00:05:05.258 "log_get_print_level", 00:05:05.258 "log_set_print_level", 00:05:05.258 "framework_enable_cpumask_locks", 00:05:05.258 "framework_disable_cpumask_locks", 00:05:05.258 "framework_wait_init", 00:05:05.258 "framework_start_init", 00:05:05.258 "scsi_get_devices", 00:05:05.258 "bdev_get_histogram", 00:05:05.258 "bdev_enable_histogram", 00:05:05.258 "bdev_set_qos_limit", 00:05:05.258 "bdev_set_qd_sampling_period", 00:05:05.258 "bdev_get_bdevs", 00:05:05.258 "bdev_reset_iostat", 00:05:05.258 "bdev_get_iostat", 00:05:05.258 "bdev_examine", 00:05:05.258 "bdev_wait_for_examine", 00:05:05.258 "bdev_set_options", 00:05:05.258 "notify_get_notifications", 00:05:05.258 "notify_get_types", 00:05:05.258 "accel_get_stats", 00:05:05.258 "accel_set_options", 00:05:05.258 "accel_set_driver", 00:05:05.258 "accel_crypto_key_destroy", 00:05:05.258 "accel_crypto_keys_get", 00:05:05.258 "accel_crypto_key_create", 00:05:05.258 "accel_assign_opc", 00:05:05.258 "accel_get_module_info", 00:05:05.258 "accel_get_opc_assignments", 00:05:05.258 "vmd_rescan", 00:05:05.258 "vmd_remove_device", 00:05:05.258 "vmd_enable", 00:05:05.258 "sock_set_default_impl", 00:05:05.258 "sock_impl_set_options", 00:05:05.258 "sock_impl_get_options", 00:05:05.258 "iobuf_get_stats", 00:05:05.258 "iobuf_set_options", 00:05:05.258 "framework_get_pci_devices", 00:05:05.258 "framework_get_config", 00:05:05.258 "framework_get_subsystems", 00:05:05.258 "trace_get_info", 00:05:05.258 "trace_get_tpoint_group_mask", 00:05:05.258 "trace_disable_tpoint_group", 00:05:05.258 "trace_enable_tpoint_group", 00:05:05.258 "trace_clear_tpoint_mask", 00:05:05.258 "trace_set_tpoint_mask", 00:05:05.258 "keyring_get_keys", 00:05:05.258 "spdk_get_version", 00:05:05.258 "rpc_get_methods" 00:05:05.258 ] 00:05:05.258 13:34:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:05.258 13:34:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:05.258 13:34:07 -- common/autotest_common.sh@10 -- # set +x 00:05:05.258 13:34:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:05:05.258 13:34:07 -- spdkcli/tcp.sh@38 -- # killprocess 1021974 00:05:05.258 13:34:07 -- common/autotest_common.sh@936 -- # '[' -z 1021974 ']' 00:05:05.258 13:34:07 -- common/autotest_common.sh@940 -- # kill -0 1021974 00:05:05.258 13:34:07 -- common/autotest_common.sh@941 -- # uname 00:05:05.258 13:34:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.258 13:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1021974 00:05:05.258 13:34:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.258 13:34:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.258 13:34:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1021974' 00:05:05.258 killing process with pid 1021974 00:05:05.258 13:34:07 -- common/autotest_common.sh@955 -- # kill 1021974 00:05:05.258 13:34:07 -- common/autotest_common.sh@960 -- # wait 1021974 00:05:05.824 00:05:05.824 real 0m1.478s 00:05:05.824 user 0m2.664s 00:05:05.824 sys 0m0.496s 00:05:05.824 13:34:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.824 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.824 ************************************ 00:05:05.824 END TEST spdkcli_tcp 00:05:05.824 ************************************ 00:05:05.824 13:34:08 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.824 13:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.824 13:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.824 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.824 ************************************ 00:05:05.824 START TEST dpdk_mem_utility 00:05:05.824 ************************************ 00:05:05.824 13:34:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.824 * Looking for test storage... 00:05:05.824 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:05.824 13:34:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:05.824 13:34:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1022315 00:05:05.824 13:34:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.824 13:34:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1022315 00:05:05.824 13:34:08 -- common/autotest_common.sh@817 -- # '[' -z 1022315 ']' 00:05:05.824 13:34:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.824 13:34:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.824 13:34:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.824 13:34:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.824 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.824 [2024-04-18 13:34:08.588790] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
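The dpdk_mem_utility test beginning here asks the running target to dump its DPDK memory state and then post-processes the dump with a helper script. A sketch of the two steps the trace shows (the socket path is rpc_cmd's default in this run; the dump file name is whatever the RPC reports, /tmp/spdk_mem_dump.txt here):

  # Ask the target to write its DPDK memory statistics to a file.
  ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
  # -> { "filename": "/tmp/spdk_mem_dump.txt" }

  # Summarize heaps, mempools and memzones from that dump ...
  ./scripts/dpdk_mem_info.py

  # ... or print the per-element layout of heap 0, which produces the long
  # "list of free elements" / "list of memzone associated elements" report
  # that follows in the log.
  ./scripts/dpdk_mem_info.py -m 0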
00:05:05.824 [2024-04-18 13:34:08.588900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022315 ] 00:05:06.090 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.090 [2024-04-18 13:34:08.677201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.090 [2024-04-18 13:34:08.799499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.401 13:34:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:06.401 13:34:09 -- common/autotest_common.sh@850 -- # return 0 00:05:06.401 13:34:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:06.401 13:34:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:06.401 13:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:06.401 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.401 { 00:05:06.401 "filename": "/tmp/spdk_mem_dump.txt" 00:05:06.401 } 00:05:06.401 13:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:06.401 13:34:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:06.401 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:06.401 1 heaps totaling size 814.000000 MiB 00:05:06.401 size: 814.000000 MiB heap id: 0 00:05:06.401 end heaps---------- 00:05:06.401 8 mempools totaling size 598.116089 MiB 00:05:06.401 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:06.401 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:06.401 size: 84.521057 MiB name: bdev_io_1022315 00:05:06.401 size: 51.011292 MiB name: evtpool_1022315 00:05:06.401 size: 50.003479 MiB name: msgpool_1022315 00:05:06.401 size: 21.763794 MiB name: PDU_Pool 00:05:06.401 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:06.401 size: 0.026123 MiB name: Session_Pool 00:05:06.401 end mempools------- 00:05:06.401 6 memzones totaling size 4.142822 MiB 00:05:06.401 size: 1.000366 MiB name: RG_ring_0_1022315 00:05:06.401 size: 1.000366 MiB name: RG_ring_1_1022315 00:05:06.401 size: 1.000366 MiB name: RG_ring_4_1022315 00:05:06.401 size: 1.000366 MiB name: RG_ring_5_1022315 00:05:06.401 size: 0.125366 MiB name: RG_ring_2_1022315 00:05:06.401 size: 0.015991 MiB name: RG_ring_3_1022315 00:05:06.401 end memzones------- 00:05:06.401 13:34:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:06.402 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:06.402 list of free elements. 
size: 12.519348 MiB 00:05:06.402 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:06.402 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:06.402 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:06.402 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:06.402 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:06.402 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:06.402 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:06.402 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:06.402 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:06.402 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:06.402 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:06.402 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:06.402 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:06.402 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:06.402 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:06.402 list of standard malloc elements. size: 199.218079 MiB 00:05:06.402 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:06.402 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:06.402 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:06.402 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:06.402 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:06.402 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:06.402 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:06.402 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:06.402 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:06.402 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:06.402 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:06.402 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:06.402 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:06.402 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:06.402 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:06.402 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:06.402 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:06.402 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:06.402 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:06.402 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:06.402 list of memzone associated elements. size: 602.262573 MiB 00:05:06.402 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:06.402 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:06.402 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:06.402 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:06.402 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:06.402 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1022315_0 00:05:06.402 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:06.402 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1022315_0 00:05:06.402 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:06.402 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1022315_0 00:05:06.402 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:06.402 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:06.402 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:06.402 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:06.402 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:06.402 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1022315 00:05:06.402 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:06.402 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1022315 00:05:06.402 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:06.402 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1022315 00:05:06.402 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:06.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:06.402 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:06.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:06.402 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:06.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:06.402 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:06.402 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:06.402 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:06.402 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1022315 00:05:06.402 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:06.402 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1022315 00:05:06.402 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:06.402 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1022315 00:05:06.402 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:06.402 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1022315 00:05:06.402 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:06.402 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1022315 00:05:06.402 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:06.402 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:06.402 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:06.402 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:06.402 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:06.402 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:06.402 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:06.402 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1022315 00:05:06.402 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:06.402 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:06.402 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:06.402 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:06.402 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:06.402 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1022315 00:05:06.402 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:06.402 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:06.402 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:06.402 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1022315 00:05:06.402 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:06.402 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1022315 00:05:06.402 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:06.402 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:06.659 13:34:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:06.659 13:34:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1022315 00:05:06.659 13:34:09 -- common/autotest_common.sh@936 -- # '[' -z 1022315 ']' 00:05:06.659 13:34:09 -- common/autotest_common.sh@940 -- # kill -0 1022315 00:05:06.659 13:34:09 -- common/autotest_common.sh@941 -- # uname 00:05:06.659 13:34:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.659 13:34:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1022315 00:05:06.659 13:34:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.659 13:34:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.659 13:34:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1022315' 00:05:06.659 killing process with pid 1022315 00:05:06.659 13:34:09 -- common/autotest_common.sh@955 -- # kill 1022315 00:05:06.659 13:34:09 -- common/autotest_common.sh@960 -- # wait 1022315 00:05:06.917 00:05:06.917 real 0m1.242s 00:05:06.917 user 0m1.276s 00:05:06.917 sys 0m0.464s 00:05:06.917 13:34:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.917 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.917 ************************************ 00:05:06.917 END TEST dpdk_mem_utility 00:05:06.917 ************************************ 00:05:07.174 13:34:09 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:07.175 13:34:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.175 13:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.175 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.175 
************************************ 00:05:07.175 START TEST event 00:05:07.175 ************************************ 00:05:07.175 13:34:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:07.175 * Looking for test storage... 00:05:07.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:07.175 13:34:09 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:07.175 13:34:09 -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.175 13:34:09 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.175 13:34:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:07.175 13:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.175 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.433 ************************************ 00:05:07.433 START TEST event_perf 00:05:07.433 ************************************ 00:05:07.433 13:34:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.433 Running I/O for 1 seconds...[2024-04-18 13:34:10.039950] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:07.433 [2024-04-18 13:34:10.040018] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022520 ] 00:05:07.433 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.433 [2024-04-18 13:34:10.118399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.691 [2024-04-18 13:34:10.242908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.691 [2024-04-18 13:34:10.242974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.691 [2024-04-18 13:34:10.243003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.691 [2024-04-18 13:34:10.243007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.624 Running I/O for 1 seconds... 00:05:08.624 lcore 0: 214850 00:05:08.624 lcore 1: 214849 00:05:08.624 lcore 2: 214849 00:05:08.624 lcore 3: 214849 00:05:08.624 done. 00:05:08.624 00:05:08.624 real 0m1.348s 00:05:08.624 user 0m4.246s 00:05:08.624 sys 0m0.096s 00:05:08.624 13:34:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.624 13:34:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.624 ************************************ 00:05:08.624 END TEST event_perf 00:05:08.624 ************************************ 00:05:08.624 13:34:11 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:08.624 13:34:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:08.624 13:34:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.624 13:34:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.882 ************************************ 00:05:08.882 START TEST event_reactor 00:05:08.882 ************************************ 00:05:08.882 13:34:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:08.882 [2024-04-18 13:34:11.507897] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:08.882 [2024-04-18 13:34:11.507997] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022694 ] 00:05:08.882 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.882 [2024-04-18 13:34:11.592149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.140 [2024-04-18 13:34:11.715414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.072 test_start 00:05:10.072 oneshot 00:05:10.073 tick 100 00:05:10.073 tick 100 00:05:10.073 tick 250 00:05:10.073 tick 100 00:05:10.073 tick 100 00:05:10.073 tick 100 00:05:10.073 tick 250 00:05:10.073 tick 500 00:05:10.073 tick 100 00:05:10.073 tick 100 00:05:10.073 tick 250 00:05:10.073 tick 100 00:05:10.073 tick 100 00:05:10.073 test_end 00:05:10.073 00:05:10.073 real 0m1.352s 00:05:10.073 user 0m1.251s 00:05:10.073 sys 0m0.095s 00:05:10.073 13:34:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.073 13:34:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.073 ************************************ 00:05:10.073 END TEST event_reactor 00:05:10.073 ************************************ 00:05:10.073 13:34:12 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.073 13:34:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:10.073 13:34:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.073 13:34:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.331 ************************************ 00:05:10.331 START TEST event_reactor_perf 00:05:10.331 ************************************ 00:05:10.331 13:34:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.331 [2024-04-18 13:34:12.980544] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
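The event tests in this stretch of the log are three small micro-benchmarks of the SPDK event framework, each run for one second. Their invocations, as they appear in the trace (paths shortened; the reactor_perf run is the one starting just below):

  # Events per second across four reactors (cores 0-3).
  ./test/event/event_perf/event_perf -m 0xF -t 1

  # Scheduling of the oneshot and tick events shown above, on a single reactor.
  ./test/event/reactor/reactor -t 1

  # Raw event throughput on a single reactor; the result is reported
  # further below as "Performance: ... events per second".
  ./test/event/reactor_perf/reactor_perf -t 1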
00:05:10.331 [2024-04-18 13:34:12.980608] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022859 ] 00:05:10.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.331 [2024-04-18 13:34:13.058565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.589 [2024-04-18 13:34:13.180960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.540 test_start 00:05:11.540 test_end 00:05:11.540 Performance: 354045 events per second 00:05:11.540 00:05:11.540 real 0m1.342s 00:05:11.540 user 0m1.246s 00:05:11.540 sys 0m0.090s 00:05:11.540 13:34:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.540 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:11.540 ************************************ 00:05:11.540 END TEST event_reactor_perf 00:05:11.540 ************************************ 00:05:11.540 13:34:14 -- event/event.sh@49 -- # uname -s 00:05:11.540 13:34:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:11.540 13:34:14 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:11.540 13:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.540 13:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.540 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:11.798 ************************************ 00:05:11.798 START TEST event_scheduler 00:05:11.798 ************************************ 00:05:11.798 13:34:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:11.798 * Looking for test storage... 00:05:11.798 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:11.798 13:34:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:11.798 13:34:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1023166 00:05:11.798 13:34:14 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:11.798 13:34:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.798 13:34:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 1023166 00:05:11.798 13:34:14 -- common/autotest_common.sh@817 -- # '[' -z 1023166 ']' 00:05:11.798 13:34:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.798 13:34:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:11.798 13:34:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.798 13:34:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:11.798 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:11.798 [2024-04-18 13:34:14.548521] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:11.799 [2024-04-18 13:34:14.548616] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023166 ] 00:05:11.799 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.057 [2024-04-18 13:34:14.633259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.057 [2024-04-18 13:34:14.757076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.057 [2024-04-18 13:34:14.757132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.057 [2024-04-18 13:34:14.757185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.057 [2024-04-18 13:34:14.757188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.057 13:34:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.057 13:34:14 -- common/autotest_common.sh@850 -- # return 0 00:05:12.057 13:34:14 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.057 13:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.057 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.057 POWER: Env isn't set yet! 00:05:12.057 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:12.057 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:12.057 POWER: Cannot get available frequencies of lcore 0 00:05:12.057 POWER: Attempting to initialise PSTAT power management... 00:05:12.057 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:12.057 POWER: Initialized successfully for lcore 0 power management 00:05:12.057 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:12.057 POWER: Initialized successfully for lcore 1 power management 00:05:12.057 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:12.057 POWER: Initialized successfully for lcore 2 power management 00:05:12.057 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:12.057 POWER: Initialized successfully for lcore 3 power management 00:05:12.057 13:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.057 13:34:14 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.057 13:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.057 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.314 [2024-04-18 13:34:14.939610] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
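With the app launched via --wait-for-rpc, the trace above switches the framework to the dynamic scheduler, finishes initialization (which triggers the POWER/governor messages), and confirms the scheduler test application started. The scheduler_create_thread sub-test that follows drives the app through a plugin RPC; a condensed sketch of those calls (rpc_cmd in the trace wraps rpc.py pointed at the app's RPC socket, assumed to be /var/tmp/spdk.sock here, and the scheduler_plugin module must be importable, e.g. via PYTHONPATH):

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

  # Select the dynamic scheduler while the app is still waiting for RPCs,
  # then let framework initialization complete.
  $RPC framework_set_scheduler dynamic
  $RPC framework_start_init

  # Create pinned threads with different cpumasks and active percentages,
  # mirroring (in condensed form) the scheduler_create_thread steps below.
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $RPC --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
  del=$($RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $RPC --plugin scheduler_plugin scheduler_thread_delete "$del"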
00:05:12.314 13:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:14 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:12.315 13:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.315 13:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.315 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 ************************************ 00:05:12.315 START TEST scheduler_create_thread 00:05:12.315 ************************************ 00:05:12.315 13:34:15 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 2 00:05:12.315 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 3 00:05:12.315 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 4 00:05:12.315 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 5 00:05:12.315 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 6 00:05:12.315 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 7 00:05:12.315 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.315 8 00:05:12.315 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.315 13:34:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:12.315 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.315 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.573 9 00:05:12.573 
13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.573 13:34:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:12.573 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.573 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.573 10 00:05:12.573 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.573 13:34:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.573 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.573 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.573 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.573 13:34:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.573 13:34:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.573 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.573 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.573 13:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.573 13:34:15 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.573 13:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.573 13:34:15 -- common/autotest_common.sh@10 -- # set +x 00:05:13.947 13:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.947 13:34:16 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.947 13:34:16 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.947 13:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.947 13:34:16 -- common/autotest_common.sh@10 -- # set +x 00:05:14.879 13:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.879 00:05:14.879 real 0m2.618s 00:05:14.879 user 0m0.012s 00:05:14.879 sys 0m0.002s 00:05:14.879 13:34:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.879 13:34:17 -- common/autotest_common.sh@10 -- # set +x 00:05:14.879 ************************************ 00:05:14.879 END TEST scheduler_create_thread 00:05:14.879 ************************************ 00:05:14.879 13:34:17 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.879 13:34:17 -- scheduler/scheduler.sh@46 -- # killprocess 1023166 00:05:14.879 13:34:17 -- common/autotest_common.sh@936 -- # '[' -z 1023166 ']' 00:05:14.879 13:34:17 -- common/autotest_common.sh@940 -- # kill -0 1023166 00:05:14.879 13:34:17 -- common/autotest_common.sh@941 -- # uname 00:05:14.879 13:34:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.879 13:34:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1023166 00:05:15.136 13:34:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:15.136 13:34:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:15.137 13:34:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1023166' 00:05:15.137 killing process with pid 1023166 00:05:15.137 13:34:17 -- common/autotest_common.sh@955 -- # kill 1023166 00:05:15.137 13:34:17 -- common/autotest_common.sh@960 -- # wait 1023166 00:05:15.394 [2024-04-18 13:34:18.135054] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
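The POWER messages just below come from the DPDK power library undoing what it did at startup: each lcore's cpufreq governor was switched to 'performance' for the test and is now set back to the saved original ('userspace' on this node). The governors in effect at any point can be inspected from sysfs, for example:

  # One line per CPU; per the messages below these read 'userspace' again
  # once the restore completes.
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor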
00:05:15.651 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:15.651 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:15.651 POWER: Power management governor of lcore 1 has been set to 'userspace' successfully 00:05:15.651 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:15.651 POWER: Power management governor of lcore 2 has been set to 'userspace' successfully 00:05:15.651 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:15.651 POWER: Power management governor of lcore 3 has been set to 'userspace' successfully 00:05:15.651 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:15.651 00:05:15.651 real 0m4.004s 00:05:15.651 user 0m6.088s 00:05:15.651 sys 0m0.416s 00:05:15.651 13:34:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.651 13:34:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.651 ************************************ 00:05:15.651 END TEST event_scheduler 00:05:15.651 ************************************ 00:05:15.909 13:34:18 -- event/event.sh@51 -- # modprobe -n nbd 00:05:15.909 13:34:18 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:15.909 13:34:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.909 13:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.909 13:34:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.909 ************************************ 00:05:15.909 START TEST app_repeat 00:05:15.909 ************************************ 00:05:15.909 13:34:18 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:15.909 13:34:18 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.909 13:34:18 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.909 13:34:18 -- event/event.sh@13 -- # local nbd_list 00:05:15.909 13:34:18 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.909 13:34:18 -- event/event.sh@14 -- # local bdev_list 00:05:15.909 13:34:18 -- event/event.sh@15 -- # local repeat_times=4 00:05:15.909 13:34:18 -- event/event.sh@17 -- # modprobe nbd 00:05:15.909 13:34:18 -- event/event.sh@19 -- # repeat_pid=1023641 00:05:15.909 13:34:18 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:15.909 13:34:18 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.909 13:34:18 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1023641' 00:05:15.909 Process app_repeat pid: 1023641 00:05:15.909 13:34:18 -- event/event.sh@23 -- # for i in {0..2} 00:05:15.909 13:34:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:15.909 spdk_app_start Round 0 00:05:15.909 13:34:18 -- event/event.sh@25 -- # waitforlisten 1023641 /var/tmp/spdk-nbd.sock 00:05:15.909 13:34:18 -- common/autotest_common.sh@817 -- # '[' -z 1023641 ']' 00:05:15.909 13:34:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.909 13:34:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.909 13:34:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:15.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.909 13:34:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.909 13:34:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.909 [2024-04-18 13:34:18.628499] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:15.909 [2024-04-18 13:34:18.628569] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023641 ] 00:05:15.909 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.909 [2024-04-18 13:34:18.708383] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.166 [2024-04-18 13:34:18.831840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.166 [2024-04-18 13:34:18.831847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.166 13:34:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.166 13:34:18 -- common/autotest_common.sh@850 -- # return 0 00:05:16.166 13:34:18 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.729 Malloc0 00:05:16.729 13:34:19 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.987 Malloc1 00:05:16.987 13:34:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@12 -- # local i 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.987 13:34:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.551 /dev/nbd0 00:05:17.551 13:34:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.551 13:34:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.551 13:34:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:17.551 13:34:20 -- common/autotest_common.sh@855 -- # local i 00:05:17.551 13:34:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:17.551 13:34:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:17.551 13:34:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:17.551 13:34:20 -- common/autotest_common.sh@859 -- # 
break 00:05:17.551 13:34:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:17.551 13:34:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:17.551 13:34:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.551 1+0 records in 00:05:17.551 1+0 records out 00:05:17.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178768 s, 22.9 MB/s 00:05:17.551 13:34:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.551 13:34:20 -- common/autotest_common.sh@872 -- # size=4096 00:05:17.551 13:34:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.551 13:34:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:17.551 13:34:20 -- common/autotest_common.sh@875 -- # return 0 00:05:17.551 13:34:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.551 13:34:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.551 13:34:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.809 /dev/nbd1 00:05:17.809 13:34:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.809 13:34:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.809 13:34:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:17.809 13:34:20 -- common/autotest_common.sh@855 -- # local i 00:05:17.809 13:34:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:17.809 13:34:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:17.809 13:34:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:17.809 13:34:20 -- common/autotest_common.sh@859 -- # break 00:05:17.809 13:34:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:17.809 13:34:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:17.809 13:34:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.809 1+0 records in 00:05:17.809 1+0 records out 00:05:17.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194312 s, 21.1 MB/s 00:05:17.809 13:34:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.809 13:34:20 -- common/autotest_common.sh@872 -- # size=4096 00:05:17.809 13:34:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.809 13:34:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:17.809 13:34:20 -- common/autotest_common.sh@875 -- # return 0 00:05:17.809 13:34:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.809 13:34:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.809 13:34:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.809 13:34:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.809 13:34:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.067 { 00:05:18.067 "nbd_device": "/dev/nbd0", 00:05:18.067 "bdev_name": "Malloc0" 00:05:18.067 }, 00:05:18.067 { 00:05:18.067 "nbd_device": "/dev/nbd1", 00:05:18.067 "bdev_name": "Malloc1" 00:05:18.067 } 00:05:18.067 ]' 
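The waitfornbd probes traced above reduce to a small readiness check: after nbd_start_disk, the test polls /proc/partitions for the device name and then confirms that a single 4 KiB O_DIRECT read returns data. A minimal sketch of that start-and-probe step, assuming an SPDK checkout at SPDK_DIR and a scratch file under /tmp (the real run uses spdk/test/event/nbdtest):

  # Sketch of the nbd start + readiness probe pattern shown above.
  SPDK_DIR=/path/to/spdk            # placeholder
  SOCK=/var/tmp/spdk-nbd.sock

  $SPDK_DIR/scripts/rpc.py -s $SOCK nbd_start_disk Malloc0 /dev/nbd0

  # wait (up to 20 tries in the helper) for the kernel to list the device;
  # the retry pacing here is a sketch detail, not taken from the trace
  for i in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done

  # one direct 4096-byte read must succeed and produce a non-empty file
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
  rm -f /tmp/nbdtest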
00:05:18.067 13:34:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.067 { 00:05:18.067 "nbd_device": "/dev/nbd0", 00:05:18.067 "bdev_name": "Malloc0" 00:05:18.067 }, 00:05:18.067 { 00:05:18.067 "nbd_device": "/dev/nbd1", 00:05:18.067 "bdev_name": "Malloc1" 00:05:18.067 } 00:05:18.067 ]' 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.067 /dev/nbd1' 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.067 /dev/nbd1' 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.067 256+0 records in 00:05:18.067 256+0 records out 00:05:18.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526806 s, 199 MB/s 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.067 13:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.325 256+0 records in 00:05:18.325 256+0 records out 00:05:18.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027791 s, 37.7 MB/s 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.325 256+0 records in 00:05:18.325 256+0 records out 00:05:18.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269287 s, 38.9 MB/s 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
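The write/verify pass that just completed is the core of nbd_rpc_data_verify: fill a scratch file with 1 MiB of random data, copy it onto every exported nbd device with O_DIRECT, then compare the first 1M of each device back against the scratch file. A condensed sketch, with the block geometry and device list taken from the trace and the scratch-file path replaced by a placeholder:

  # Sketch of the write + verify flow shown above (nbd_common.sh@76-85).
  TMP=/tmp/nbdrandtest              # placeholder; the run uses spdk/test/event/nbdrandtest
  NBD_LIST="/dev/nbd0 /dev/nbd1"

  # write phase: 256 x 4096-byte blocks (1 MiB) of random data onto each device
  dd if=/dev/urandom of=$TMP bs=4096 count=256
  for dev in $NBD_LIST; do
      dd if=$TMP of=$dev bs=4096 count=256 oflag=direct
  done

  # verify phase: each device must match the scratch file byte-for-byte
  for dev in $NBD_LIST; do
      cmp -b -n 1M $TMP $dev
  done
  rm $TMP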
00:05:18.325 13:34:20 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@51 -- # local i 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.325 13:34:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@41 -- # break 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.583 13:34:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@41 -- # break 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.841 13:34:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@65 -- # true 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.099 13:34:21 -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.356 13:34:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.615 13:34:22 -- event/event.sh@35 -- # sleep 3 00:05:19.906 [2024-04-18 13:34:22.585472] app.c: 828:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:05:20.164 [2024-04-18 13:34:22.704495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.164 [2024-04-18 13:34:22.704497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.164 [2024-04-18 13:34:22.768262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.164 [2024-04-18 13:34:22.768337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.691 13:34:25 -- event/event.sh@23 -- # for i in {0..2} 00:05:22.691 13:34:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.691 spdk_app_start Round 1 00:05:22.691 13:34:25 -- event/event.sh@25 -- # waitforlisten 1023641 /var/tmp/spdk-nbd.sock 00:05:22.691 13:34:25 -- common/autotest_common.sh@817 -- # '[' -z 1023641 ']' 00:05:22.691 13:34:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.691 13:34:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:22.691 13:34:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.691 13:34:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:22.691 13:34:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.948 13:34:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.949 13:34:25 -- common/autotest_common.sh@850 -- # return 0 00:05:22.949 13:34:25 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.513 Malloc0 00:05:23.514 13:34:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.771 Malloc1 00:05:23.771 13:34:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@12 -- # local i 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.771 13:34:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.335 /dev/nbd0 00:05:24.335 13:34:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.335 13:34:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.335 
13:34:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:24.335 13:34:26 -- common/autotest_common.sh@855 -- # local i 00:05:24.335 13:34:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:24.335 13:34:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:24.335 13:34:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:24.335 13:34:26 -- common/autotest_common.sh@859 -- # break 00:05:24.335 13:34:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:24.335 13:34:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:24.335 13:34:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.335 1+0 records in 00:05:24.335 1+0 records out 00:05:24.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173053 s, 23.7 MB/s 00:05:24.335 13:34:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:24.335 13:34:26 -- common/autotest_common.sh@872 -- # size=4096 00:05:24.335 13:34:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:24.335 13:34:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:24.335 13:34:26 -- common/autotest_common.sh@875 -- # return 0 00:05:24.335 13:34:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.335 13:34:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.335 13:34:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.592 /dev/nbd1 00:05:24.592 13:34:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.592 13:34:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.592 13:34:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:24.592 13:34:27 -- common/autotest_common.sh@855 -- # local i 00:05:24.592 13:34:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:24.592 13:34:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:24.592 13:34:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:24.592 13:34:27 -- common/autotest_common.sh@859 -- # break 00:05:24.592 13:34:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:24.592 13:34:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:24.592 13:34:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.592 1+0 records in 00:05:24.592 1+0 records out 00:05:24.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315714 s, 13.0 MB/s 00:05:24.592 13:34:27 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:24.592 13:34:27 -- common/autotest_common.sh@872 -- # size=4096 00:05:24.592 13:34:27 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:24.592 13:34:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:24.592 13:34:27 -- common/autotest_common.sh@875 -- # return 0 00:05:24.592 13:34:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.592 13:34:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.592 13:34:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.592 13:34:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.592 
13:34:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.848 13:34:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.848 { 00:05:24.848 "nbd_device": "/dev/nbd0", 00:05:24.848 "bdev_name": "Malloc0" 00:05:24.848 }, 00:05:24.848 { 00:05:24.848 "nbd_device": "/dev/nbd1", 00:05:24.848 "bdev_name": "Malloc1" 00:05:24.848 } 00:05:24.848 ]' 00:05:24.848 13:34:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.848 { 00:05:24.848 "nbd_device": "/dev/nbd0", 00:05:24.848 "bdev_name": "Malloc0" 00:05:24.848 }, 00:05:24.849 { 00:05:24.849 "nbd_device": "/dev/nbd1", 00:05:24.849 "bdev_name": "Malloc1" 00:05:24.849 } 00:05:24.849 ]' 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.849 /dev/nbd1' 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.849 /dev/nbd1' 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.849 13:34:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.106 256+0 records in 00:05:25.106 256+0 records out 00:05:25.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564246 s, 186 MB/s 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.106 256+0 records in 00:05:25.106 256+0 records out 00:05:25.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252115 s, 41.6 MB/s 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.106 256+0 records in 00:05:25.106 256+0 records out 00:05:25.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300944 s, 34.8 MB/s 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.106 
13:34:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@51 -- # local i 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.106 13:34:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@41 -- # break 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.364 13:34:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.622 13:34:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.622 13:34:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@41 -- # break 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.879 13:34:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.444 13:34:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.444 13:34:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.444 13:34:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@65 -- # true 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.444 
13:34:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.444 13:34:29 -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.444 13:34:29 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.702 13:34:29 -- event/event.sh@35 -- # sleep 3 00:05:26.960 [2024-04-18 13:34:29.699977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.219 [2024-04-18 13:34:29.818772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.219 [2024-04-18 13:34:29.818778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.219 [2024-04-18 13:34:29.883754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.219 [2024-04-18 13:34:29.883830] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.744 13:34:32 -- event/event.sh@23 -- # for i in {0..2} 00:05:29.744 13:34:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:29.744 spdk_app_start Round 2 00:05:29.744 13:34:32 -- event/event.sh@25 -- # waitforlisten 1023641 /var/tmp/spdk-nbd.sock 00:05:29.744 13:34:32 -- common/autotest_common.sh@817 -- # '[' -z 1023641 ']' 00:05:29.744 13:34:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.744 13:34:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.744 13:34:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.744 13:34:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.744 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:05:30.000 13:34:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.000 13:34:32 -- common/autotest_common.sh@850 -- # return 0 00:05:30.000 13:34:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.564 Malloc0 00:05:30.565 13:34:33 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.129 Malloc1 00:05:31.129 13:34:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@12 -- # local i 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.129 13:34:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.386 /dev/nbd0 00:05:31.386 13:34:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.386 13:34:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.386 13:34:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:31.386 13:34:34 -- common/autotest_common.sh@855 -- # local i 00:05:31.386 13:34:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:31.386 13:34:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:31.386 13:34:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:31.386 13:34:34 -- common/autotest_common.sh@859 -- # break 00:05:31.386 13:34:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:31.386 13:34:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:31.386 13:34:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.386 1+0 records in 00:05:31.386 1+0 records out 00:05:31.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163949 s, 25.0 MB/s 00:05:31.386 13:34:34 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:31.386 13:34:34 -- common/autotest_common.sh@872 -- # size=4096 00:05:31.386 13:34:34 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:31.386 13:34:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:31.386 13:34:34 -- common/autotest_common.sh@875 -- # return 0 00:05:31.386 13:34:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.386 13:34:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.386 13:34:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.951 /dev/nbd1 00:05:31.951 13:34:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.951 13:34:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.951 13:34:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:31.951 13:34:34 -- common/autotest_common.sh@855 -- # local i 00:05:31.951 13:34:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:31.951 13:34:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:31.951 13:34:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:31.951 13:34:34 -- common/autotest_common.sh@859 -- # break 00:05:31.951 13:34:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:31.951 13:34:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:31.951 13:34:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.951 1+0 records in 00:05:31.951 1+0 records out 00:05:31.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189844 s, 21.6 MB/s 00:05:31.951 13:34:34 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:31.951 13:34:34 -- common/autotest_common.sh@872 -- # size=4096 00:05:31.951 13:34:34 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:31.951 13:34:34 -- 
common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:31.951 13:34:34 -- common/autotest_common.sh@875 -- # return 0 00:05:31.951 13:34:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.951 13:34:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.951 13:34:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.951 13:34:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.951 13:34:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.209 { 00:05:32.209 "nbd_device": "/dev/nbd0", 00:05:32.209 "bdev_name": "Malloc0" 00:05:32.209 }, 00:05:32.209 { 00:05:32.209 "nbd_device": "/dev/nbd1", 00:05:32.209 "bdev_name": "Malloc1" 00:05:32.209 } 00:05:32.209 ]' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.209 { 00:05:32.209 "nbd_device": "/dev/nbd0", 00:05:32.209 "bdev_name": "Malloc0" 00:05:32.209 }, 00:05:32.209 { 00:05:32.209 "nbd_device": "/dev/nbd1", 00:05:32.209 "bdev_name": "Malloc1" 00:05:32.209 } 00:05:32.209 ]' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.209 /dev/nbd1' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.209 /dev/nbd1' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.209 256+0 records in 00:05:32.209 256+0 records out 00:05:32.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050665 s, 207 MB/s 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.209 256+0 records in 00:05:32.209 256+0 records out 00:05:32.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248224 s, 42.2 MB/s 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.209 256+0 records in 00:05:32.209 256+0 records out 00:05:32.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260334 s, 40.3 MB/s 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:05:32.209 13:34:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@51 -- # local i 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.209 13:34:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@41 -- # break 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.774 13:34:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@41 -- # break 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.339 13:34:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:33.596 13:34:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@65 -- # true 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.596 13:34:36 -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.596 13:34:36 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.160 13:34:36 -- event/event.sh@35 -- # sleep 3 00:05:34.417 [2024-04-18 13:34:37.033640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.417 [2024-04-18 13:34:37.154745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.417 [2024-04-18 13:34:37.154750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.417 [2024-04-18 13:34:37.213481] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.417 [2024-04-18 13:34:37.213551] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.970 13:34:39 -- event/event.sh@38 -- # waitforlisten 1023641 /var/tmp/spdk-nbd.sock 00:05:36.970 13:34:39 -- common/autotest_common.sh@817 -- # '[' -z 1023641 ']' 00:05:36.970 13:34:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.970 13:34:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.970 13:34:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.970 13:34:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.970 13:34:39 -- common/autotest_common.sh@10 -- # set +x 00:05:37.535 13:34:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.535 13:34:40 -- common/autotest_common.sh@850 -- # return 0 00:05:37.535 13:34:40 -- event/event.sh@39 -- # killprocess 1023641 00:05:37.535 13:34:40 -- common/autotest_common.sh@936 -- # '[' -z 1023641 ']' 00:05:37.535 13:34:40 -- common/autotest_common.sh@940 -- # kill -0 1023641 00:05:37.535 13:34:40 -- common/autotest_common.sh@941 -- # uname 00:05:37.535 13:34:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.535 13:34:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1023641 00:05:37.535 13:34:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.535 13:34:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.535 13:34:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1023641' 00:05:37.535 killing process with pid 1023641 00:05:37.535 13:34:40 -- common/autotest_common.sh@955 -- # kill 1023641 00:05:37.535 13:34:40 -- common/autotest_common.sh@960 -- # wait 1023641 00:05:37.793 spdk_app_start is called in Round 0. 00:05:37.793 Shutdown signal received, stop current app iteration 00:05:37.793 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 reinitialization... 00:05:37.793 spdk_app_start is called in Round 1. 
00:05:37.793 Shutdown signal received, stop current app iteration 00:05:37.793 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 reinitialization... 00:05:37.793 spdk_app_start is called in Round 2. 00:05:37.793 Shutdown signal received, stop current app iteration 00:05:37.793 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 reinitialization... 00:05:37.793 spdk_app_start is called in Round 3. 00:05:37.793 Shutdown signal received, stop current app iteration 00:05:37.793 13:34:40 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:37.793 13:34:40 -- event/event.sh@42 -- # return 0 00:05:37.793 00:05:37.793 real 0m21.823s 00:05:37.793 user 0m49.737s 00:05:37.793 sys 0m4.295s 00:05:37.793 13:34:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.793 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:05:37.793 ************************************ 00:05:37.793 END TEST app_repeat 00:05:37.793 ************************************ 00:05:37.793 13:34:40 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:37.793 13:34:40 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:37.793 13:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.793 13:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.793 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:05:37.793 ************************************ 00:05:37.793 START TEST cpu_locks 00:05:37.793 ************************************ 00:05:37.793 13:34:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.051 * Looking for test storage... 00:05:38.051 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:38.051 13:34:40 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:38.051 13:34:40 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:38.051 13:34:40 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:38.051 13:34:40 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:38.051 13:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.051 13:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.051 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.051 ************************************ 00:05:38.051 START TEST default_locks 00:05:38.051 ************************************ 00:05:38.051 13:34:40 -- common/autotest_common.sh@1111 -- # default_locks 00:05:38.051 13:34:40 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1026525 00:05:38.051 13:34:40 -- event/cpu_locks.sh@47 -- # waitforlisten 1026525 00:05:38.051 13:34:40 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.051 13:34:40 -- common/autotest_common.sh@817 -- # '[' -z 1026525 ']' 00:05:38.051 13:34:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.051 13:34:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.051 13:34:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.052 13:34:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.052 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.052 [2024-04-18 13:34:40.794637] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:38.052 [2024-04-18 13:34:40.794758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026525 ] 00:05:38.052 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.310 [2024-04-18 13:34:40.880557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.310 [2024-04-18 13:34:41.002277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.568 13:34:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.568 13:34:41 -- common/autotest_common.sh@850 -- # return 0 00:05:38.568 13:34:41 -- event/cpu_locks.sh@49 -- # locks_exist 1026525 00:05:38.568 13:34:41 -- event/cpu_locks.sh@22 -- # lslocks -p 1026525 00:05:38.568 13:34:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.134 lslocks: write error 00:05:39.134 13:34:41 -- event/cpu_locks.sh@50 -- # killprocess 1026525 00:05:39.134 13:34:41 -- common/autotest_common.sh@936 -- # '[' -z 1026525 ']' 00:05:39.134 13:34:41 -- common/autotest_common.sh@940 -- # kill -0 1026525 00:05:39.134 13:34:41 -- common/autotest_common.sh@941 -- # uname 00:05:39.134 13:34:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.134 13:34:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1026525 00:05:39.134 13:34:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.134 13:34:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.134 13:34:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1026525' 00:05:39.134 killing process with pid 1026525 00:05:39.134 13:34:41 -- common/autotest_common.sh@955 -- # kill 1026525 00:05:39.134 13:34:41 -- common/autotest_common.sh@960 -- # wait 1026525 00:05:39.699 13:34:42 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1026525 00:05:39.699 13:34:42 -- common/autotest_common.sh@638 -- # local es=0 00:05:39.699 13:34:42 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1026525 00:05:39.699 13:34:42 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:39.699 13:34:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.699 13:34:42 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:39.699 13:34:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:39.699 13:34:42 -- common/autotest_common.sh@641 -- # waitforlisten 1026525 00:05:39.699 13:34:42 -- common/autotest_common.sh@817 -- # '[' -z 1026525 ']' 00:05:39.699 13:34:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.699 13:34:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.699 13:34:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
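The default_locks check above rests on one observable: while spdk_tgt runs with core locks enabled, lslocks shows an spdk_cpu_lock entry held by its pid, and once the process is killed the same pid can no longer be waited on. A minimal sketch of the positive half, using the lslocks/grep pair exactly as it appears in the trace (the pid is whatever the launched spdk_tgt reported):

  # Sketch of the locks_exist behaviour seen above (cpu_locks.sh@22).
  pid=$spdk_tgt_pid                 # pid of the spdk_tgt started with -m 0x1

  # the core-mask lock is visible as an spdk_cpu_lock* entry for the target;
  # grep -q closes the pipe early, which is why lslocks logs "write error"
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $pid"
  fi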
00:05:39.699 13:34:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.699 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.699 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1026525) - No such process 00:05:39.699 ERROR: process (pid: 1026525) is no longer running 00:05:39.699 13:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.699 13:34:42 -- common/autotest_common.sh@850 -- # return 1 00:05:39.699 13:34:42 -- common/autotest_common.sh@641 -- # es=1 00:05:39.699 13:34:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.699 13:34:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:39.699 13:34:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.699 13:34:42 -- event/cpu_locks.sh@54 -- # no_locks 00:05:39.699 13:34:42 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.699 13:34:42 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.699 13:34:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.699 00:05:39.699 real 0m1.476s 00:05:39.699 user 0m1.453s 00:05:39.699 sys 0m0.622s 00:05:39.699 13:34:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.699 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.699 ************************************ 00:05:39.699 END TEST default_locks 00:05:39.699 ************************************ 00:05:39.699 13:34:42 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:39.699 13:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.699 13:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.699 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.699 ************************************ 00:05:39.699 START TEST default_locks_via_rpc 00:05:39.699 ************************************ 00:05:39.699 13:34:42 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:39.699 13:34:42 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1026698 00:05:39.699 13:34:42 -- event/cpu_locks.sh@63 -- # waitforlisten 1026698 00:05:39.699 13:34:42 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.699 13:34:42 -- common/autotest_common.sh@817 -- # '[' -z 1026698 ']' 00:05:39.699 13:34:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.699 13:34:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.700 13:34:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.700 13:34:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.700 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.700 [2024-04-18 13:34:42.421782] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:39.700 [2024-04-18 13:34:42.421901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026698 ] 00:05:39.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.958 [2024-04-18 13:34:42.508576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.958 [2024-04-18 13:34:42.630230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.217 13:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.217 13:34:42 -- common/autotest_common.sh@850 -- # return 0 00:05:40.217 13:34:42 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:40.217 13:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.217 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:40.217 13:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.217 13:34:42 -- event/cpu_locks.sh@67 -- # no_locks 00:05:40.217 13:34:42 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.217 13:34:42 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.217 13:34:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.217 13:34:42 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.217 13:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.217 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:40.217 13:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.217 13:34:42 -- event/cpu_locks.sh@71 -- # locks_exist 1026698 00:05:40.217 13:34:42 -- event/cpu_locks.sh@22 -- # lslocks -p 1026698 00:05:40.217 13:34:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.479 13:34:43 -- event/cpu_locks.sh@73 -- # killprocess 1026698 00:05:40.479 13:34:43 -- common/autotest_common.sh@936 -- # '[' -z 1026698 ']' 00:05:40.479 13:34:43 -- common/autotest_common.sh@940 -- # kill -0 1026698 00:05:40.480 13:34:43 -- common/autotest_common.sh@941 -- # uname 00:05:40.480 13:34:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.480 13:34:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1026698 00:05:40.480 13:34:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.480 13:34:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.480 13:34:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1026698' 00:05:40.480 killing process with pid 1026698 00:05:40.480 13:34:43 -- common/autotest_common.sh@955 -- # kill 1026698 00:05:40.480 13:34:43 -- common/autotest_common.sh@960 -- # wait 1026698 00:05:41.049 00:05:41.049 real 0m1.356s 00:05:41.049 user 0m1.435s 00:05:41.049 sys 0m0.606s 00:05:41.049 13:34:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.049 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.049 ************************************ 00:05:41.049 END TEST default_locks_via_rpc 00:05:41.049 ************************************ 00:05:41.049 13:34:43 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:41.049 13:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.049 13:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.049 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.049 ************************************ 00:05:41.049 START TEST non_locking_app_on_locked_coremask 
00:05:41.049 ************************************ 00:05:41.049 13:34:43 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:41.049 13:34:43 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1026900 00:05:41.049 13:34:43 -- event/cpu_locks.sh@81 -- # waitforlisten 1026900 /var/tmp/spdk.sock 00:05:41.049 13:34:43 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.049 13:34:43 -- common/autotest_common.sh@817 -- # '[' -z 1026900 ']' 00:05:41.049 13:34:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.049 13:34:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.049 13:34:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.049 13:34:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.049 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.307 [2024-04-18 13:34:43.899799] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:41.307 [2024-04-18 13:34:43.899889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026900 ] 00:05:41.307 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.307 [2024-04-18 13:34:43.978479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.307 [2024-04-18 13:34:44.102472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.873 13:34:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.873 13:34:44 -- common/autotest_common.sh@850 -- # return 0 00:05:41.873 13:34:44 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1026998 00:05:41.873 13:34:44 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.873 13:34:44 -- event/cpu_locks.sh@85 -- # waitforlisten 1026998 /var/tmp/spdk2.sock 00:05:41.873 13:34:44 -- common/autotest_common.sh@817 -- # '[' -z 1026998 ']' 00:05:41.873 13:34:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.873 13:34:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.873 13:34:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.873 13:34:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.873 13:34:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.873 [2024-04-18 13:34:44.439095] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:41.873 [2024-04-18 13:34:44.439179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026998 ] 00:05:41.873 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.873 [2024-04-18 13:34:44.554467] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.873 [2024-04-18 13:34:44.554500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.131 [2024-04-18 13:34:44.796639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.063 13:34:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.063 13:34:45 -- common/autotest_common.sh@850 -- # return 0 00:05:43.063 13:34:45 -- event/cpu_locks.sh@87 -- # locks_exist 1026900 00:05:43.063 13:34:45 -- event/cpu_locks.sh@22 -- # lslocks -p 1026900 00:05:43.063 13:34:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.996 lslocks: write error 00:05:43.996 13:34:46 -- event/cpu_locks.sh@89 -- # killprocess 1026900 00:05:43.996 13:34:46 -- common/autotest_common.sh@936 -- # '[' -z 1026900 ']' 00:05:43.996 13:34:46 -- common/autotest_common.sh@940 -- # kill -0 1026900 00:05:43.996 13:34:46 -- common/autotest_common.sh@941 -- # uname 00:05:43.996 13:34:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.996 13:34:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1026900 00:05:44.254 13:34:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.254 13:34:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.254 13:34:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1026900' 00:05:44.254 killing process with pid 1026900 00:05:44.254 13:34:46 -- common/autotest_common.sh@955 -- # kill 1026900 00:05:44.254 13:34:46 -- common/autotest_common.sh@960 -- # wait 1026900 00:05:45.187 13:34:47 -- event/cpu_locks.sh@90 -- # killprocess 1026998 00:05:45.187 13:34:47 -- common/autotest_common.sh@936 -- # '[' -z 1026998 ']' 00:05:45.187 13:34:47 -- common/autotest_common.sh@940 -- # kill -0 1026998 00:05:45.187 13:34:47 -- common/autotest_common.sh@941 -- # uname 00:05:45.187 13:34:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.187 13:34:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1026998 00:05:45.187 13:34:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.187 13:34:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.187 13:34:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1026998' 00:05:45.187 killing process with pid 1026998 00:05:45.187 13:34:47 -- common/autotest_common.sh@955 -- # kill 1026998 00:05:45.187 13:34:47 -- common/autotest_common.sh@960 -- # wait 1026998 00:05:45.753 00:05:45.753 real 0m4.449s 00:05:45.753 user 0m4.747s 00:05:45.753 sys 0m1.536s 00:05:45.753 13:34:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.753 13:34:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 ************************************ 00:05:45.753 END TEST non_locking_app_on_locked_coremask 00:05:45.753 ************************************ 00:05:45.753 13:34:48 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:45.753 13:34:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.753 13:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.753 13:34:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 ************************************ 00:05:45.753 START TEST locking_app_on_unlocked_coremask 00:05:45.753 ************************************ 00:05:45.753 13:34:48 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:45.753 13:34:48 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1027452 00:05:45.753 13:34:48 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:45.753 13:34:48 -- event/cpu_locks.sh@99 -- # waitforlisten 1027452 /var/tmp/spdk.sock 00:05:45.753 13:34:48 -- common/autotest_common.sh@817 -- # '[' -z 1027452 ']' 00:05:45.753 13:34:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.753 13:34:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.753 13:34:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.753 13:34:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.753 13:34:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 [2024-04-18 13:34:48.527070] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:45.753 [2024-04-18 13:34:48.527247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027452 ] 00:05:46.011 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.011 [2024-04-18 13:34:48.641860] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.011 [2024-04-18 13:34:48.641903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.011 [2024-04-18 13:34:48.761665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.268 13:34:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.268 13:34:49 -- common/autotest_common.sh@850 -- # return 0 00:05:46.268 13:34:49 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1027569 00:05:46.268 13:34:49 -- event/cpu_locks.sh@103 -- # waitforlisten 1027569 /var/tmp/spdk2.sock 00:05:46.268 13:34:49 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:46.268 13:34:49 -- common/autotest_common.sh@817 -- # '[' -z 1027569 ']' 00:05:46.268 13:34:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.268 13:34:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.268 13:34:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.268 13:34:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.268 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.524 [2024-04-18 13:34:49.089130] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:46.524 [2024-04-18 13:34:49.089217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027569 ] 00:05:46.524 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.524 [2024-04-18 13:34:49.202312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.782 [2024-04-18 13:34:49.448364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.352 13:34:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.352 13:34:50 -- common/autotest_common.sh@850 -- # return 0 00:05:47.352 13:34:50 -- event/cpu_locks.sh@105 -- # locks_exist 1027569 00:05:47.352 13:34:50 -- event/cpu_locks.sh@22 -- # lslocks -p 1027569 00:05:47.352 13:34:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.724 lslocks: write error 00:05:48.724 13:34:51 -- event/cpu_locks.sh@107 -- # killprocess 1027452 00:05:48.724 13:34:51 -- common/autotest_common.sh@936 -- # '[' -z 1027452 ']' 00:05:48.724 13:34:51 -- common/autotest_common.sh@940 -- # kill -0 1027452 00:05:48.724 13:34:51 -- common/autotest_common.sh@941 -- # uname 00:05:48.724 13:34:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.724 13:34:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1027452 00:05:48.724 13:34:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.724 13:34:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.724 13:34:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1027452' 00:05:48.724 killing process with pid 1027452 00:05:48.724 13:34:51 -- common/autotest_common.sh@955 -- # kill 1027452 00:05:48.724 13:34:51 -- common/autotest_common.sh@960 -- # wait 1027452 00:05:49.730 13:34:52 -- event/cpu_locks.sh@108 -- # killprocess 1027569 00:05:49.730 13:34:52 -- common/autotest_common.sh@936 -- # '[' -z 1027569 ']' 00:05:49.730 13:34:52 -- common/autotest_common.sh@940 -- # kill -0 1027569 00:05:49.730 13:34:52 -- common/autotest_common.sh@941 -- # uname 00:05:49.730 13:34:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.730 13:34:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1027569 00:05:49.730 13:34:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.730 13:34:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.730 13:34:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1027569' 00:05:49.730 killing process with pid 1027569 00:05:49.730 13:34:52 -- common/autotest_common.sh@955 -- # kill 1027569 00:05:49.730 13:34:52 -- common/autotest_common.sh@960 -- # wait 1027569 00:05:50.312 00:05:50.312 real 0m4.424s 00:05:50.312 user 0m4.757s 00:05:50.312 sys 0m1.471s 00:05:50.312 13:34:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.312 13:34:52 -- common/autotest_common.sh@10 -- # set +x 00:05:50.312 ************************************ 00:05:50.312 END TEST locking_app_on_unlocked_coremask 00:05:50.312 ************************************ 00:05:50.312 13:34:52 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:50.312 13:34:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.312 13:34:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.312 13:34:52 -- common/autotest_common.sh@10 -- # set +x 00:05:50.312 
************************************ 00:05:50.312 START TEST locking_app_on_locked_coremask 00:05:50.312 ************************************ 00:05:50.312 13:34:52 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:50.312 13:34:52 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1028027 00:05:50.312 13:34:52 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.312 13:34:52 -- event/cpu_locks.sh@116 -- # waitforlisten 1028027 /var/tmp/spdk.sock 00:05:50.313 13:34:52 -- common/autotest_common.sh@817 -- # '[' -z 1028027 ']' 00:05:50.313 13:34:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.313 13:34:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.313 13:34:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.313 13:34:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.313 13:34:52 -- common/autotest_common.sh@10 -- # set +x 00:05:50.313 [2024-04-18 13:34:53.040684] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:50.313 [2024-04-18 13:34:53.040788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028027 ] 00:05:50.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.570 [2024-04-18 13:34:53.121333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.570 [2024-04-18 13:34:53.241898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.827 13:34:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.827 13:34:53 -- common/autotest_common.sh@850 -- # return 0 00:05:50.827 13:34:53 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1028145 00:05:50.827 13:34:53 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.827 13:34:53 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1028145 /var/tmp/spdk2.sock 00:05:50.827 13:34:53 -- common/autotest_common.sh@638 -- # local es=0 00:05:50.827 13:34:53 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1028145 /var/tmp/spdk2.sock 00:05:50.827 13:34:53 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:50.827 13:34:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.827 13:34:53 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:50.827 13:34:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.827 13:34:53 -- common/autotest_common.sh@641 -- # waitforlisten 1028145 /var/tmp/spdk2.sock 00:05:50.827 13:34:53 -- common/autotest_common.sh@817 -- # '[' -z 1028145 ']' 00:05:50.827 13:34:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.827 13:34:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.827 13:34:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:50.827 13:34:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.827 13:34:53 -- common/autotest_common.sh@10 -- # set +x 00:05:50.827 [2024-04-18 13:34:53.569435] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:50.827 [2024-04-18 13:34:53.569536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028145 ] 00:05:50.827 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.084 [2024-04-18 13:34:53.689349] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1028027 has claimed it. 00:05:51.084 [2024-04-18 13:34:53.689405] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.649 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1028145) - No such process 00:05:51.649 ERROR: process (pid: 1028145) is no longer running 00:05:51.649 13:34:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.649 13:34:54 -- common/autotest_common.sh@850 -- # return 1 00:05:51.649 13:34:54 -- common/autotest_common.sh@641 -- # es=1 00:05:51.649 13:34:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:51.649 13:34:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:51.649 13:34:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:51.649 13:34:54 -- event/cpu_locks.sh@122 -- # locks_exist 1028027 00:05:51.649 13:34:54 -- event/cpu_locks.sh@22 -- # lslocks -p 1028027 00:05:51.649 13:34:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.213 lslocks: write error 00:05:52.213 13:34:54 -- event/cpu_locks.sh@124 -- # killprocess 1028027 00:05:52.213 13:34:54 -- common/autotest_common.sh@936 -- # '[' -z 1028027 ']' 00:05:52.213 13:34:54 -- common/autotest_common.sh@940 -- # kill -0 1028027 00:05:52.213 13:34:54 -- common/autotest_common.sh@941 -- # uname 00:05:52.213 13:34:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.213 13:34:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1028027 00:05:52.213 13:34:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.213 13:34:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.213 13:34:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1028027' 00:05:52.213 killing process with pid 1028027 00:05:52.213 13:34:54 -- common/autotest_common.sh@955 -- # kill 1028027 00:05:52.213 13:34:54 -- common/autotest_common.sh@960 -- # wait 1028027 00:05:52.778 00:05:52.778 real 0m2.290s 00:05:52.778 user 0m2.532s 00:05:52.778 sys 0m0.759s 00:05:52.778 13:34:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.778 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.778 ************************************ 00:05:52.778 END TEST locking_app_on_locked_coremask 00:05:52.778 ************************************ 00:05:52.778 13:34:55 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.778 13:34:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.778 13:34:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.778 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.778 ************************************ 00:05:52.778 START TEST locking_overlapped_coremask 00:05:52.778 
************************************ 00:05:52.778 13:34:55 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:52.778 13:34:55 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1028428 00:05:52.778 13:34:55 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.778 13:34:55 -- event/cpu_locks.sh@133 -- # waitforlisten 1028428 /var/tmp/spdk.sock 00:05:52.778 13:34:55 -- common/autotest_common.sh@817 -- # '[' -z 1028428 ']' 00:05:52.778 13:34:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.778 13:34:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.778 13:34:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.778 13:34:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.778 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.778 [2024-04-18 13:34:55.479159] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:52.778 [2024-04-18 13:34:55.479270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028428 ] 00:05:52.778 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.778 [2024-04-18 13:34:55.565288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.035 [2024-04-18 13:34:55.690533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.036 [2024-04-18 13:34:55.690589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.036 [2024-04-18 13:34:55.690592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.293 13:34:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.293 13:34:55 -- common/autotest_common.sh@850 -- # return 0 00:05:53.293 13:34:55 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1028457 00:05:53.293 13:34:55 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1028457 /var/tmp/spdk2.sock 00:05:53.293 13:34:55 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:53.293 13:34:55 -- common/autotest_common.sh@638 -- # local es=0 00:05:53.293 13:34:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1028457 /var/tmp/spdk2.sock 00:05:53.293 13:34:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:53.293 13:34:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.293 13:34:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:53.293 13:34:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.293 13:34:55 -- common/autotest_common.sh@641 -- # waitforlisten 1028457 /var/tmp/spdk2.sock 00:05:53.293 13:34:55 -- common/autotest_common.sh@817 -- # '[' -z 1028457 ']' 00:05:53.293 13:34:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.293 13:34:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.293 13:34:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:53.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.293 13:34:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.293 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:05:53.293 [2024-04-18 13:34:56.020709] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:53.293 [2024-04-18 13:34:56.020803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028457 ] 00:05:53.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.550 [2024-04-18 13:34:56.135318] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1028428 has claimed it. 00:05:53.550 [2024-04-18 13:34:56.135384] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.114 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1028457) - No such process 00:05:54.114 ERROR: process (pid: 1028457) is no longer running 00:05:54.114 13:34:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.114 13:34:56 -- common/autotest_common.sh@850 -- # return 1 00:05:54.114 13:34:56 -- common/autotest_common.sh@641 -- # es=1 00:05:54.114 13:34:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:54.114 13:34:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:54.114 13:34:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:54.114 13:34:56 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:54.114 13:34:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.114 13:34:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.114 13:34:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.114 13:34:56 -- event/cpu_locks.sh@141 -- # killprocess 1028428 00:05:54.114 13:34:56 -- common/autotest_common.sh@936 -- # '[' -z 1028428 ']' 00:05:54.114 13:34:56 -- common/autotest_common.sh@940 -- # kill -0 1028428 00:05:54.114 13:34:56 -- common/autotest_common.sh@941 -- # uname 00:05:54.114 13:34:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.114 13:34:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1028428 00:05:54.114 13:34:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.114 13:34:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.114 13:34:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1028428' 00:05:54.114 killing process with pid 1028428 00:05:54.114 13:34:56 -- common/autotest_common.sh@955 -- # kill 1028428 00:05:54.114 13:34:56 -- common/autotest_common.sh@960 -- # wait 1028428 00:05:54.679 00:05:54.679 real 0m1.830s 00:05:54.679 user 0m4.745s 00:05:54.679 sys 0m0.545s 00:05:54.679 13:34:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.679 13:34:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.679 ************************************ 00:05:54.679 END TEST locking_overlapped_coremask 00:05:54.679 ************************************ 00:05:54.679 13:34:57 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.679 13:34:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.679 13:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.679 13:34:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.679 ************************************ 00:05:54.679 START TEST locking_overlapped_coremask_via_rpc 00:05:54.679 ************************************ 00:05:54.679 13:34:57 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:54.679 13:34:57 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1028630 00:05:54.679 13:34:57 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.679 13:34:57 -- event/cpu_locks.sh@149 -- # waitforlisten 1028630 /var/tmp/spdk.sock 00:05:54.679 13:34:57 -- common/autotest_common.sh@817 -- # '[' -z 1028630 ']' 00:05:54.679 13:34:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.679 13:34:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.679 13:34:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.679 13:34:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.679 13:34:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.679 [2024-04-18 13:34:57.457077] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:54.679 [2024-04-18 13:34:57.457175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028630 ] 00:05:54.936 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.936 [2024-04-18 13:34:57.542035] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.936 [2024-04-18 13:34:57.542071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.936 [2024-04-18 13:34:57.666971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.936 [2024-04-18 13:34:57.667022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.936 [2024-04-18 13:34:57.667027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.194 13:34:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.194 13:34:57 -- common/autotest_common.sh@850 -- # return 0 00:05:55.194 13:34:57 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1028761 00:05:55.194 13:34:57 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.194 13:34:57 -- event/cpu_locks.sh@153 -- # waitforlisten 1028761 /var/tmp/spdk2.sock 00:05:55.194 13:34:57 -- common/autotest_common.sh@817 -- # '[' -z 1028761 ']' 00:05:55.194 13:34:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.194 13:34:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.194 13:34:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:55.194 13:34:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.194 13:34:57 -- common/autotest_common.sh@10 -- # set +x 00:05:55.451 [2024-04-18 13:34:58.011302] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:55.451 [2024-04-18 13:34:58.011413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028761 ] 00:05:55.451 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.451 [2024-04-18 13:34:58.132543] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.451 [2024-04-18 13:34:58.132592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.709 [2024-04-18 13:34:58.379835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.709 [2024-04-18 13:34:58.379889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.709 [2024-04-18 13:34:58.379892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.642 13:34:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.642 13:34:59 -- common/autotest_common.sh@850 -- # return 0 00:05:56.642 13:34:59 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.642 13:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:56.642 13:34:59 -- common/autotest_common.sh@10 -- # set +x 00:05:56.642 13:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:56.642 13:34:59 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.642 13:34:59 -- common/autotest_common.sh@638 -- # local es=0 00:05:56.642 13:34:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.642 13:34:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:56.642 13:34:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.642 13:34:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:56.642 13:34:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.642 13:34:59 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.642 13:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:56.642 13:34:59 -- common/autotest_common.sh@10 -- # set +x 00:05:56.642 [2024-04-18 13:34:59.128040] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1028630 has claimed it. 
00:05:56.642 request: 00:05:56.642 { 00:05:56.642 "method": "framework_enable_cpumask_locks", 00:05:56.642 "req_id": 1 00:05:56.642 } 00:05:56.642 Got JSON-RPC error response 00:05:56.642 response: 00:05:56.642 { 00:05:56.642 "code": -32603, 00:05:56.642 "message": "Failed to claim CPU core: 2" 00:05:56.642 } 00:05:56.642 13:34:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:56.642 13:34:59 -- common/autotest_common.sh@641 -- # es=1 00:05:56.642 13:34:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:56.642 13:34:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:56.642 13:34:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:56.642 13:34:59 -- event/cpu_locks.sh@158 -- # waitforlisten 1028630 /var/tmp/spdk.sock 00:05:56.642 13:34:59 -- common/autotest_common.sh@817 -- # '[' -z 1028630 ']' 00:05:56.642 13:34:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.642 13:34:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.642 13:34:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.642 13:34:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.642 13:34:59 -- common/autotest_common.sh@10 -- # set +x 00:05:56.899 13:34:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.899 13:34:59 -- common/autotest_common.sh@850 -- # return 0 00:05:56.899 13:34:59 -- event/cpu_locks.sh@159 -- # waitforlisten 1028761 /var/tmp/spdk2.sock 00:05:56.899 13:34:59 -- common/autotest_common.sh@817 -- # '[' -z 1028761 ']' 00:05:56.899 13:34:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.899 13:34:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.899 13:34:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:56.899 13:34:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.899 13:34:59 -- common/autotest_common.sh@10 -- # set +x 00:05:57.155 13:34:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.155 13:34:59 -- common/autotest_common.sh@850 -- # return 0 00:05:57.155 13:34:59 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:57.155 13:34:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.155 13:34:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.155 13:34:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.155 00:05:57.155 real 0m2.416s 00:05:57.155 user 0m1.401s 00:05:57.155 sys 0m0.228s 00:05:57.155 13:34:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.155 13:34:59 -- common/autotest_common.sh@10 -- # set +x 00:05:57.155 ************************************ 00:05:57.155 END TEST locking_overlapped_coremask_via_rpc 00:05:57.155 ************************************ 00:05:57.155 13:34:59 -- event/cpu_locks.sh@174 -- # cleanup 00:05:57.155 13:34:59 -- event/cpu_locks.sh@15 -- # [[ -z 1028630 ]] 00:05:57.155 13:34:59 -- event/cpu_locks.sh@15 -- # killprocess 1028630 00:05:57.155 13:34:59 -- common/autotest_common.sh@936 -- # '[' -z 1028630 ']' 00:05:57.155 13:34:59 -- common/autotest_common.sh@940 -- # kill -0 1028630 00:05:57.155 13:34:59 -- common/autotest_common.sh@941 -- # uname 00:05:57.156 13:34:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.156 13:34:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1028630 00:05:57.156 13:34:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.156 13:34:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.156 13:34:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1028630' 00:05:57.156 killing process with pid 1028630 00:05:57.156 13:34:59 -- common/autotest_common.sh@955 -- # kill 1028630 00:05:57.156 13:34:59 -- common/autotest_common.sh@960 -- # wait 1028630 00:05:57.719 13:35:00 -- event/cpu_locks.sh@16 -- # [[ -z 1028761 ]] 00:05:57.719 13:35:00 -- event/cpu_locks.sh@16 -- # killprocess 1028761 00:05:57.719 13:35:00 -- common/autotest_common.sh@936 -- # '[' -z 1028761 ']' 00:05:57.719 13:35:00 -- common/autotest_common.sh@940 -- # kill -0 1028761 00:05:57.719 13:35:00 -- common/autotest_common.sh@941 -- # uname 00:05:57.719 13:35:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.719 13:35:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1028761 00:05:57.719 13:35:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:57.719 13:35:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:57.719 13:35:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1028761' 00:05:57.719 killing process with pid 1028761 00:05:57.719 13:35:00 -- common/autotest_common.sh@955 -- # kill 1028761 00:05:57.719 13:35:00 -- common/autotest_common.sh@960 -- # wait 1028761 00:05:58.284 13:35:00 -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.284 13:35:00 -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.284 13:35:00 -- event/cpu_locks.sh@15 -- # [[ -z 1028630 ]] 00:05:58.284 13:35:00 -- event/cpu_locks.sh@15 -- # killprocess 1028630 
00:05:58.284 13:35:00 -- common/autotest_common.sh@936 -- # '[' -z 1028630 ']' 00:05:58.284 13:35:00 -- common/autotest_common.sh@940 -- # kill -0 1028630 00:05:58.284 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1028630) - No such process 00:05:58.284 13:35:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1028630 is not found' 00:05:58.284 Process with pid 1028630 is not found 00:05:58.284 13:35:00 -- event/cpu_locks.sh@16 -- # [[ -z 1028761 ]] 00:05:58.284 13:35:00 -- event/cpu_locks.sh@16 -- # killprocess 1028761 00:05:58.284 13:35:00 -- common/autotest_common.sh@936 -- # '[' -z 1028761 ']' 00:05:58.284 13:35:00 -- common/autotest_common.sh@940 -- # kill -0 1028761 00:05:58.284 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1028761) - No such process 00:05:58.284 13:35:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1028761 is not found' 00:05:58.284 Process with pid 1028761 is not found 00:05:58.284 13:35:00 -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.284 00:05:58.284 real 0m20.340s 00:05:58.284 user 0m34.305s 00:05:58.284 sys 0m7.107s 00:05:58.284 13:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.284 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:05:58.284 ************************************ 00:05:58.284 END TEST cpu_locks 00:05:58.284 ************************************ 00:05:58.284 00:05:58.284 real 0m51.071s 00:05:58.284 user 1m37.199s 00:05:58.284 sys 0m12.596s 00:05:58.284 13:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.284 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:05:58.284 ************************************ 00:05:58.284 END TEST event 00:05:58.284 ************************************ 00:05:58.284 13:35:00 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:58.284 13:35:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.284 13:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.284 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:05:58.284 ************************************ 00:05:58.284 START TEST thread 00:05:58.284 ************************************ 00:05:58.284 13:35:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:58.542 * Looking for test storage... 00:05:58.542 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:58.542 13:35:01 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.542 13:35:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:58.542 13:35:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.542 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.542 ************************************ 00:05:58.542 START TEST thread_poller_perf 00:05:58.542 ************************************ 00:05:58.542 13:35:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.542 [2024-04-18 13:35:01.238442] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:58.542 [2024-04-18 13:35:01.238506] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029267 ] 00:05:58.542 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.542 [2024-04-18 13:35:01.317040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.799 [2024-04-18 13:35:01.440161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.799 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:00.172 ====================================== 00:06:00.172 busy:2709464792 (cyc) 00:06:00.172 total_run_count: 291000 00:06:00.172 tsc_hz: 2700000000 (cyc) 00:06:00.172 ====================================== 00:06:00.172 poller_cost: 9310 (cyc), 3448 (nsec) 00:06:00.172 00:06:00.172 real 0m1.354s 00:06:00.172 user 0m1.248s 00:06:00.172 sys 0m0.100s 00:06:00.172 13:35:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.172 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.172 ************************************ 00:06:00.172 END TEST thread_poller_perf 00:06:00.172 ************************************ 00:06:00.172 13:35:02 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.172 13:35:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:00.172 13:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.172 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.172 ************************************ 00:06:00.172 START TEST thread_poller_perf 00:06:00.172 ************************************ 00:06:00.172 13:35:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.172 [2024-04-18 13:35:02.721213] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:00.172 [2024-04-18 13:35:02.721277] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029431 ] 00:06:00.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.172 [2024-04-18 13:35:02.799769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.172 [2024-04-18 13:35:02.923053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.172 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:01.544 ====================================== 00:06:01.544 busy:2702889916 (cyc) 00:06:01.544 total_run_count: 3833000 00:06:01.544 tsc_hz: 2700000000 (cyc) 00:06:01.544 ====================================== 00:06:01.544 poller_cost: 705 (cyc), 261 (nsec) 00:06:01.544 00:06:01.544 real 0m1.347s 00:06:01.544 user 0m1.242s 00:06:01.544 sys 0m0.100s 00:06:01.544 13:35:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.544 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.544 ************************************ 00:06:01.544 END TEST thread_poller_perf 00:06:01.544 ************************************ 00:06:01.544 13:35:04 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.544 00:06:01.544 real 0m3.029s 00:06:01.544 user 0m2.616s 00:06:01.544 sys 0m0.390s 00:06:01.544 13:35:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.544 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.544 ************************************ 00:06:01.544 END TEST thread 00:06:01.544 ************************************ 00:06:01.544 13:35:04 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:01.544 13:35:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.544 13:35:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.544 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.544 ************************************ 00:06:01.544 START TEST accel 00:06:01.544 ************************************ 00:06:01.544 13:35:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:01.544 * Looking for test storage... 00:06:01.544 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:01.545 13:35:04 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:01.545 13:35:04 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:01.545 13:35:04 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.545 13:35:04 -- accel/accel.sh@62 -- # spdk_tgt_pid=1029639 00:06:01.545 13:35:04 -- accel/accel.sh@63 -- # waitforlisten 1029639 00:06:01.545 13:35:04 -- common/autotest_common.sh@817 -- # '[' -z 1029639 ']' 00:06:01.545 13:35:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.545 13:35:04 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:01.545 13:35:04 -- accel/accel.sh@61 -- # build_accel_config 00:06:01.545 13:35:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.545 13:35:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.545 13:35:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.545 13:35:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.545 13:35:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.545 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.545 13:35:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.545 13:35:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.545 13:35:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.545 13:35:04 -- accel/accel.sh@40 -- # local IFS=, 00:06:01.545 13:35:04 -- accel/accel.sh@41 -- # jq -r . 
00:06:01.545 [2024-04-18 13:35:04.332294] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:01.545 [2024-04-18 13:35:04.332386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029639 ] 00:06:01.802 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.802 [2024-04-18 13:35:04.431515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.802 [2024-04-18 13:35:04.554119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.064 13:35:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.064 13:35:04 -- common/autotest_common.sh@850 -- # return 0 00:06:02.064 13:35:04 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:02.064 13:35:04 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:02.064 13:35:04 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:02.064 13:35:04 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:02.064 13:35:04 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:02.064 13:35:04 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:02.064 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.064 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.064 13:35:04 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:02.064 13:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.321 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.321 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.321 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.322 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.322 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.322 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.322 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.322 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.322 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.322 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.322 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.322 13:35:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # IFS== 00:06:02.322 13:35:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:02.322 13:35:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.322 13:35:04 -- accel/accel.sh@75 -- # killprocess 1029639 00:06:02.322 13:35:04 -- common/autotest_common.sh@936 -- # '[' -z 1029639 ']' 00:06:02.322 13:35:04 -- common/autotest_common.sh@940 -- # kill -0 1029639 00:06:02.322 13:35:04 -- common/autotest_common.sh@941 -- # uname 00:06:02.322 13:35:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.322 13:35:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1029639 00:06:02.322 13:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.322 13:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.322 13:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1029639' 00:06:02.322 killing process with pid 1029639 00:06:02.322 13:35:04 -- common/autotest_common.sh@955 -- # kill 1029639 00:06:02.322 13:35:04 -- common/autotest_common.sh@960 -- # wait 1029639 00:06:02.889 13:35:05 -- accel/accel.sh@76 -- # trap - ERR 00:06:02.889 13:35:05 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:02.889 13:35:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:02.889 13:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.889 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.889 13:35:05 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:02.889 13:35:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:02.889 13:35:05 -- accel/accel.sh@12 -- # build_accel_config 
00:06:02.889 13:35:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.889 13:35:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.889 13:35:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.889 13:35:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.889 13:35:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.889 13:35:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.889 13:35:05 -- accel/accel.sh@41 -- # jq -r . 00:06:02.889 13:35:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.889 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.889 13:35:05 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:02.889 13:35:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:02.889 13:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.889 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:03.146 ************************************ 00:06:03.146 START TEST accel_missing_filename 00:06:03.146 ************************************ 00:06:03.146 13:35:05 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:03.146 13:35:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:03.146 13:35:05 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:03.146 13:35:05 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:03.146 13:35:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.146 13:35:05 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:03.146 13:35:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.146 13:35:05 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:03.146 13:35:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:03.146 13:35:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.146 13:35:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.146 13:35:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.146 13:35:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.146 13:35:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.146 13:35:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.146 13:35:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:03.146 13:35:05 -- accel/accel.sh@41 -- # jq -r . 00:06:03.146 [2024-04-18 13:35:05.782100] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:03.146 [2024-04-18 13:35:05.782177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029941 ] 00:06:03.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.146 [2024-04-18 13:35:05.905405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.407 [2024-04-18 13:35:06.028871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.407 [2024-04-18 13:35:06.093794] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.407 [2024-04-18 13:35:06.183972] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:03.698 A filename is required. 
00:06:03.698 13:35:06 -- common/autotest_common.sh@641 -- # es=234 00:06:03.698 13:35:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:03.698 13:35:06 -- common/autotest_common.sh@650 -- # es=106 00:06:03.698 13:35:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:03.698 13:35:06 -- common/autotest_common.sh@658 -- # es=1 00:06:03.698 13:35:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:03.698 00:06:03.698 real 0m0.563s 00:06:03.698 user 0m0.393s 00:06:03.698 sys 0m0.204s 00:06:03.698 13:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.698 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.698 ************************************ 00:06:03.698 END TEST accel_missing_filename 00:06:03.698 ************************************ 00:06:03.698 13:35:06 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:03.698 13:35:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:03.698 13:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.698 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.698 ************************************ 00:06:03.698 START TEST accel_compress_verify 00:06:03.698 ************************************ 00:06:03.698 13:35:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:03.698 13:35:06 -- common/autotest_common.sh@638 -- # local es=0 00:06:03.698 13:35:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:03.698 13:35:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:03.698 13:35:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.698 13:35:06 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:03.698 13:35:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.698 13:35:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:03.698 13:35:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:03.698 13:35:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.698 13:35:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.698 13:35:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.698 13:35:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.698 13:35:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.698 13:35:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.698 13:35:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:03.698 13:35:06 -- accel/accel.sh@41 -- # jq -r . 00:06:03.698 [2024-04-18 13:35:06.475814] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:03.698 [2024-04-18 13:35:06.475885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029978 ] 00:06:03.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.956 [2024-04-18 13:35:06.552142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.956 [2024-04-18 13:35:06.677611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.956 [2024-04-18 13:35:06.743849] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.213 [2024-04-18 13:35:06.829930] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:04.213 00:06:04.213 Compression does not support the verify option, aborting. 00:06:04.213 13:35:06 -- common/autotest_common.sh@641 -- # es=161 00:06:04.213 13:35:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:04.213 13:35:06 -- common/autotest_common.sh@650 -- # es=33 00:06:04.213 13:35:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:04.213 13:35:06 -- common/autotest_common.sh@658 -- # es=1 00:06:04.213 13:35:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:04.213 00:06:04.213 real 0m0.504s 00:06:04.213 user 0m0.396s 00:06:04.213 sys 0m0.150s 00:06:04.213 13:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.213 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:06:04.213 ************************************ 00:06:04.213 END TEST accel_compress_verify 00:06:04.213 ************************************ 00:06:04.213 13:35:06 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:04.213 13:35:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:04.213 13:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.213 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:06:04.470 ************************************ 00:06:04.471 START TEST accel_wrong_workload 00:06:04.471 ************************************ 00:06:04.471 13:35:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:04.471 13:35:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:04.471 13:35:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:04.471 13:35:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:04.471 13:35:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.471 13:35:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:04.471 13:35:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.471 13:35:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:04.471 13:35:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:04.471 13:35:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.471 13:35:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.471 13:35:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.471 13:35:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.471 13:35:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.471 13:35:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.471 13:35:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.471 13:35:07 -- accel/accel.sh@41 -- # jq -r . 
00:06:04.471 Unsupported workload type: foobar 00:06:04.471 [2024-04-18 13:35:07.122449] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:04.471 accel_perf options: 00:06:04.471 [-h help message] 00:06:04.471 [-q queue depth per core] 00:06:04.471 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:04.471 [-T number of threads per core 00:06:04.471 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:04.471 [-t time in seconds] 00:06:04.471 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:04.471 [ dif_verify, , dif_generate, dif_generate_copy 00:06:04.471 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:04.471 [-l for compress/decompress workloads, name of uncompressed input file 00:06:04.471 [-S for crc32c workload, use this seed value (default 0) 00:06:04.471 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:04.471 [-f for fill workload, use this BYTE value (default 255) 00:06:04.471 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:04.471 [-y verify result if this switch is on] 00:06:04.471 [-a tasks to allocate per core (default: same value as -q)] 00:06:04.471 Can be used to spread operations across a wider range of memory. 00:06:04.471 13:35:07 -- common/autotest_common.sh@641 -- # es=1 00:06:04.471 13:35:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:04.471 13:35:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:04.471 13:35:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:04.471 00:06:04.471 real 0m0.022s 00:06:04.471 user 0m0.012s 00:06:04.471 sys 0m0.010s 00:06:04.471 13:35:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.471 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.471 ************************************ 00:06:04.471 END TEST accel_wrong_workload 00:06:04.471 ************************************ 00:06:04.471 Error: writing output failed: Broken pipe 00:06:04.471 13:35:07 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:04.471 13:35:07 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:04.471 13:35:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.471 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.471 ************************************ 00:06:04.471 START TEST accel_negative_buffers 00:06:04.471 ************************************ 00:06:04.471 13:35:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:04.471 13:35:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:04.471 13:35:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:04.471 13:35:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:04.471 13:35:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.471 13:35:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:04.471 13:35:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.471 13:35:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:04.471 13:35:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:04.471 13:35:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.471 13:35:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.471 13:35:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.471 13:35:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.471 13:35:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.471 13:35:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.471 13:35:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.471 13:35:07 -- accel/accel.sh@41 -- # jq -r . 00:06:04.471 -x option must be non-negative. 00:06:04.471 [2024-04-18 13:35:07.272883] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:04.728 accel_perf options: 00:06:04.728 [-h help message] 00:06:04.728 [-q queue depth per core] 00:06:04.728 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:04.728 [-T number of threads per core 00:06:04.728 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:04.728 [-t time in seconds] 00:06:04.728 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:04.728 [ dif_verify, , dif_generate, dif_generate_copy 00:06:04.728 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:04.728 [-l for compress/decompress workloads, name of uncompressed input file 00:06:04.728 [-S for crc32c workload, use this seed value (default 0) 00:06:04.728 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:04.728 [-f for fill workload, use this BYTE value (default 255) 00:06:04.728 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:04.728 [-y verify result if this switch is on] 00:06:04.728 [-a tasks to allocate per core (default: same value as -q)] 00:06:04.728 Can be used to spread operations across a wider range of memory. 
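The option list above is the reference for the negative tests in this block: -x -1 is rejected because the xor workload documents a minimum of two source buffers. A minimal sketch of a valid xor run built only from flags shown in the help text above (hypothetical, not executed by the harness; run from the workspace root shown in this log):

  # one-second xor run: two source buffers, queue depth 64, 4 KiB transfers, result verification enabled
  ./spdk/build/examples/accel_perf -t 1 -w xor -x 2 -q 64 -o 4096 -y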
00:06:04.728 13:35:07 -- common/autotest_common.sh@641 -- # es=1 00:06:04.728 13:35:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:04.728 13:35:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:04.728 13:35:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:04.728 00:06:04.728 real 0m0.025s 00:06:04.728 user 0m0.013s 00:06:04.728 sys 0m0.013s 00:06:04.728 13:35:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.728 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.728 ************************************ 00:06:04.728 END TEST accel_negative_buffers 00:06:04.728 ************************************ 00:06:04.728 Error: writing output failed: Broken pipe 00:06:04.728 13:35:07 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:04.728 13:35:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:04.728 13:35:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.728 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.728 ************************************ 00:06:04.728 START TEST accel_crc32c 00:06:04.728 ************************************ 00:06:04.728 13:35:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:04.728 13:35:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.728 13:35:07 -- accel/accel.sh@17 -- # local accel_module 00:06:04.728 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.728 13:35:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:04.728 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.728 13:35:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:04.728 13:35:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.728 13:35:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.728 13:35:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.728 13:35:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.728 13:35:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.728 13:35:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.728 13:35:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.728 13:35:07 -- accel/accel.sh@41 -- # jq -r . 00:06:04.728 [2024-04-18 13:35:07.458422] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:04.728 [2024-04-18 13:35:07.458499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030186 ] 00:06:04.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.987 [2024-04-18 13:35:07.550523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.987 [2024-04-18 13:35:07.672614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val= 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val= 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val=0x1 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val= 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val= 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val=crc32c 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val=32 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val= 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val=software 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@22 -- # accel_module=software 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val=32 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val=32 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- 
accel/accel.sh@20 -- # val=1 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val=Yes 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val= 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:04.987 13:35:07 -- accel/accel.sh@20 -- # val= 00:06:04.987 13:35:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # IFS=: 00:06:04.987 13:35:07 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:08 -- accel/accel.sh@20 -- # val= 00:06:06.358 13:35:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # IFS=: 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:08 -- accel/accel.sh@20 -- # val= 00:06:06.358 13:35:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # IFS=: 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:08 -- accel/accel.sh@20 -- # val= 00:06:06.358 13:35:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # IFS=: 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:08 -- accel/accel.sh@20 -- # val= 00:06:06.358 13:35:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # IFS=: 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:08 -- accel/accel.sh@20 -- # val= 00:06:06.358 13:35:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # IFS=: 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:08 -- accel/accel.sh@20 -- # val= 00:06:06.358 13:35:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # IFS=: 00:06:06.358 13:35:08 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.358 13:35:08 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:06.358 13:35:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.358 00:06:06.358 real 0m1.515s 00:06:06.358 user 0m1.353s 00:06:06.358 sys 0m0.163s 00:06:06.358 13:35:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.358 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:06:06.358 ************************************ 00:06:06.358 END TEST accel_crc32c 00:06:06.358 ************************************ 00:06:06.358 13:35:08 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:06.358 13:35:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:06.358 13:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.358 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:06:06.358 ************************************ 00:06:06.358 START TEST 
accel_crc32c_C2 00:06:06.358 ************************************ 00:06:06.358 13:35:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:06.358 13:35:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.358 13:35:09 -- accel/accel.sh@17 -- # local accel_module 00:06:06.358 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.358 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.358 13:35:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:06.358 13:35:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:06.358 13:35:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.358 13:35:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.358 13:35:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.358 13:35:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.358 13:35:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.358 13:35:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.358 13:35:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.358 13:35:09 -- accel/accel.sh@41 -- # jq -r . 00:06:06.358 [2024-04-18 13:35:09.099585] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:06.358 [2024-04-18 13:35:09.099652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030415 ] 00:06:06.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.616 [2024-04-18 13:35:09.180077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.616 [2024-04-18 13:35:09.302930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val= 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val= 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=0x1 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val= 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val= 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=crc32c 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=0 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val= 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=software 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=32 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=32 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=1 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val=Yes 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val= 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:06.616 13:35:09 -- accel/accel.sh@20 -- # val= 00:06:06.616 13:35:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # IFS=: 00:06:06.616 13:35:09 -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@20 -- # val= 00:06:07.986 13:35:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@20 -- # val= 00:06:07.986 13:35:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@20 -- # val= 00:06:07.986 13:35:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@20 -- # val= 00:06:07.986 13:35:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@20 -- # val= 00:06:07.986 13:35:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 13:35:10 -- 
accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@20 -- # val= 00:06:07.986 13:35:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.986 13:35:10 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:07.986 13:35:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.986 00:06:07.986 real 0m1.516s 00:06:07.986 user 0m1.350s 00:06:07.986 sys 0m0.167s 00:06:07.986 13:35:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.986 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:06:07.986 ************************************ 00:06:07.986 END TEST accel_crc32c_C2 00:06:07.986 ************************************ 00:06:07.986 13:35:10 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:07.986 13:35:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:07.986 13:35:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.986 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:06:07.986 ************************************ 00:06:07.986 START TEST accel_copy 00:06:07.986 ************************************ 00:06:07.986 13:35:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:07.986 13:35:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.986 13:35:10 -- accel/accel.sh@17 -- # local accel_module 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 13:35:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:07.986 13:35:10 -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 13:35:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:07.986 13:35:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.986 13:35:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.986 13:35:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.986 13:35:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.986 13:35:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.986 13:35:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.986 13:35:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.986 13:35:10 -- accel/accel.sh@41 -- # jq -r . 00:06:07.986 [2024-04-18 13:35:10.744149] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:07.986 [2024-04-18 13:35:10.744213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030637 ] 00:06:07.986 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.244 [2024-04-18 13:35:10.822129] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.244 [2024-04-18 13:35:10.945179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val= 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val= 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val=0x1 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val= 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val= 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val=copy 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val= 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val=software 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val=32 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val=32 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- accel/accel.sh@20 -- # val=1 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.244 13:35:11 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:08.244 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.244 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.245 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.245 13:35:11 -- accel/accel.sh@20 -- # val=Yes 00:06:08.245 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.245 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.245 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.245 13:35:11 -- accel/accel.sh@20 -- # val= 00:06:08.245 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.245 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.245 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:08.245 13:35:11 -- accel/accel.sh@20 -- # val= 00:06:08.245 13:35:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.245 13:35:11 -- accel/accel.sh@19 -- # IFS=: 00:06:08.245 13:35:11 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.615 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.615 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.615 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.615 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.615 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.615 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.615 13:35:12 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:09.615 13:35:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.615 00:06:09.615 real 0m1.509s 00:06:09.615 user 0m1.348s 00:06:09.615 sys 0m0.162s 00:06:09.615 13:35:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.615 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:06:09.615 ************************************ 00:06:09.615 END TEST accel_copy 00:06:09.615 ************************************ 00:06:09.615 13:35:12 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.615 13:35:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:09.615 13:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.615 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:06:09.615 ************************************ 00:06:09.615 START TEST accel_fill 00:06:09.615 ************************************ 00:06:09.615 13:35:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.615 13:35:12 -- accel/accel.sh@16 -- # local accel_opc 
00:06:09.615 13:35:12 -- accel/accel.sh@17 -- # local accel_module 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.615 13:35:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.615 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.615 13:35:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.615 13:35:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.615 13:35:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.615 13:35:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.615 13:35:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.615 13:35:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.615 13:35:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.615 13:35:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.615 13:35:12 -- accel/accel.sh@41 -- # jq -r . 00:06:09.615 [2024-04-18 13:35:12.391597] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:09.615 [2024-04-18 13:35:12.391663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030800 ] 00:06:09.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.873 [2024-04-18 13:35:12.469811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.873 [2024-04-18 13:35:12.593397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val=0x1 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val=fill 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val=0x80 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # 
read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.873 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.873 13:35:12 -- accel/accel.sh@20 -- # val=software 00:06:09.873 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 13:35:12 -- accel/accel.sh@20 -- # val=64 00:06:09.874 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 13:35:12 -- accel/accel.sh@20 -- # val=64 00:06:09.874 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 13:35:12 -- accel/accel.sh@20 -- # val=1 00:06:09.874 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 13:35:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.874 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 13:35:12 -- accel/accel.sh@20 -- # val=Yes 00:06:09.874 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.874 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 13:35:12 -- accel/accel.sh@20 -- # val= 00:06:09.874 13:35:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 13:35:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:13 -- accel/accel.sh@20 -- # val= 00:06:11.244 13:35:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:13 -- accel/accel.sh@20 -- # val= 00:06:11.244 13:35:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:13 -- accel/accel.sh@20 -- # val= 00:06:11.244 13:35:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:13 -- accel/accel.sh@20 -- # val= 00:06:11.244 13:35:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:13 -- accel/accel.sh@20 -- # val= 00:06:11.244 13:35:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:13 -- accel/accel.sh@20 -- # val= 00:06:11.244 13:35:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # 
IFS=: 00:06:11.244 13:35:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.244 13:35:13 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:11.244 13:35:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.244 00:06:11.244 real 0m1.506s 00:06:11.244 user 0m1.350s 00:06:11.244 sys 0m0.158s 00:06:11.244 13:35:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.244 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:06:11.244 ************************************ 00:06:11.244 END TEST accel_fill 00:06:11.244 ************************************ 00:06:11.244 13:35:13 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:11.244 13:35:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:11.244 13:35:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.244 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:06:11.244 ************************************ 00:06:11.244 START TEST accel_copy_crc32c 00:06:11.244 ************************************ 00:06:11.244 13:35:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:11.244 13:35:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.244 13:35:14 -- accel/accel.sh@17 -- # local accel_module 00:06:11.244 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.244 13:35:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:11.244 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.244 13:35:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:11.244 13:35:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.244 13:35:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.244 13:35:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.245 13:35:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.245 13:35:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.245 13:35:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.245 13:35:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.245 13:35:14 -- accel/accel.sh@41 -- # jq -r . 00:06:11.245 [2024-04-18 13:35:14.023013] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:11.245 [2024-04-18 13:35:14.023087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031091 ] 00:06:11.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.502 [2024-04-18 13:35:14.100466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.502 [2024-04-18 13:35:14.222929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val= 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val= 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=0x1 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val= 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val= 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=0 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val= 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=software 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@22 -- # accel_module=software 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=32 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 
00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=32 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=1 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val=Yes 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val= 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:11.502 13:35:14 -- accel/accel.sh@20 -- # val= 00:06:11.502 13:35:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # IFS=: 00:06:11.502 13:35:14 -- accel/accel.sh@19 -- # read -r var val 00:06:12.872 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:12.872 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.872 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:12.872 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:12.872 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:12.872 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:12.873 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:12.873 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:12.873 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:12.873 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:12.873 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:12.873 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:12.873 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:12.873 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:12.873 13:35:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.873 13:35:15 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:12.873 13:35:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.873 00:06:12.873 real 0m1.510s 00:06:12.873 user 0m1.344s 00:06:12.873 sys 0m0.168s 00:06:12.873 13:35:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.873 13:35:15 -- common/autotest_common.sh@10 -- # set +x 00:06:12.873 ************************************ 00:06:12.873 END TEST accel_copy_crc32c 00:06:12.873 ************************************ 00:06:12.873 13:35:15 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.873 
13:35:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:12.873 13:35:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.873 13:35:15 -- common/autotest_common.sh@10 -- # set +x 00:06:12.873 ************************************ 00:06:12.873 START TEST accel_copy_crc32c_C2 00:06:12.873 ************************************ 00:06:12.873 13:35:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.873 13:35:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.873 13:35:15 -- accel/accel.sh@17 -- # local accel_module 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:12.873 13:35:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:12.873 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:12.873 13:35:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.873 13:35:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.873 13:35:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.873 13:35:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.873 13:35:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.873 13:35:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.873 13:35:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.873 13:35:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.873 13:35:15 -- accel/accel.sh@41 -- # jq -r . 00:06:12.873 [2024-04-18 13:35:15.661462] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:12.873 [2024-04-18 13:35:15.661526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031248 ] 00:06:13.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.131 [2024-04-18 13:35:15.738586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.131 [2024-04-18 13:35:15.861333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=0x1 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 
13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=0 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=software 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=32 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=32 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=1 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val=Yes 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.131 13:35:15 -- accel/accel.sh@20 -- # val= 00:06:13.131 13:35:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.131 13:35:15 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:14.503 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:14.503 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:14.503 13:35:17 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:14.503 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:14.503 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:14.503 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.503 13:35:17 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:14.503 13:35:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.503 00:06:14.503 real 0m1.506s 00:06:14.503 user 0m1.349s 00:06:14.503 sys 0m0.159s 00:06:14.503 13:35:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.503 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.503 ************************************ 00:06:14.503 END TEST accel_copy_crc32c_C2 00:06:14.503 ************************************ 00:06:14.503 13:35:17 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:14.503 13:35:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.503 13:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.503 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.503 ************************************ 00:06:14.503 START TEST accel_dualcast 00:06:14.503 ************************************ 00:06:14.503 13:35:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:14.503 13:35:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.503 13:35:17 -- accel/accel.sh@17 -- # local accel_module 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:14.503 13:35:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:14.503 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:14.503 13:35:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.503 13:35:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.503 13:35:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.503 13:35:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.503 13:35:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.503 13:35:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.503 13:35:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.503 13:35:17 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.503 13:35:17 -- accel/accel.sh@41 -- # jq -r . 00:06:14.762 [2024-04-18 13:35:17.311602] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:14.762 [2024-04-18 13:35:17.311668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031537 ] 00:06:14.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.762 [2024-04-18 13:35:17.388723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.762 [2024-04-18 13:35:17.509935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val=0x1 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val=dualcast 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val=software 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val=32 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val=32 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val=1 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val=Yes 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:15.020 13:35:17 -- accel/accel.sh@20 -- # val= 00:06:15.020 13:35:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # IFS=: 00:06:15.020 13:35:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.392 13:35:18 -- accel/accel.sh@20 -- # val= 00:06:16.392 13:35:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.392 13:35:18 -- accel/accel.sh@19 -- # IFS=: 00:06:16.392 13:35:18 -- accel/accel.sh@19 -- # read -r var val 00:06:16.392 13:35:18 -- accel/accel.sh@20 -- # val= 00:06:16.392 13:35:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.392 13:35:18 -- accel/accel.sh@19 -- # IFS=: 00:06:16.392 13:35:18 -- accel/accel.sh@19 -- # read -r var val 00:06:16.392 13:35:18 -- accel/accel.sh@20 -- # val= 00:06:16.392 13:35:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.392 13:35:18 -- accel/accel.sh@19 -- # IFS=: 00:06:16.392 13:35:18 -- accel/accel.sh@19 -- # read -r var val 00:06:16.393 13:35:18 -- accel/accel.sh@20 -- # val= 00:06:16.393 13:35:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # IFS=: 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # read -r var val 00:06:16.393 13:35:18 -- accel/accel.sh@20 -- # val= 00:06:16.393 13:35:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # IFS=: 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # read -r var val 00:06:16.393 13:35:18 -- accel/accel.sh@20 -- # val= 00:06:16.393 13:35:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # IFS=: 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # read -r var val 00:06:16.393 13:35:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.393 13:35:18 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:16.393 13:35:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.393 00:06:16.393 real 0m1.505s 00:06:16.393 user 0m1.347s 00:06:16.393 sys 0m0.159s 00:06:16.393 13:35:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.393 13:35:18 -- common/autotest_common.sh@10 -- # set +x 00:06:16.393 ************************************ 00:06:16.393 END TEST accel_dualcast 00:06:16.393 ************************************ 00:06:16.393 13:35:18 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:16.393 13:35:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:16.393 13:35:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.393 13:35:18 -- common/autotest_common.sh@10 -- # set +x 00:06:16.393 ************************************ 00:06:16.393 START TEST accel_compare 00:06:16.393 ************************************ 00:06:16.393 13:35:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:16.393 13:35:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.393 13:35:18 
-- accel/accel.sh@17 -- # local accel_module 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # IFS=: 00:06:16.393 13:35:18 -- accel/accel.sh@19 -- # read -r var val 00:06:16.393 13:35:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:16.393 13:35:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:16.393 13:35:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.393 13:35:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.393 13:35:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.393 13:35:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.393 13:35:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.393 13:35:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.393 13:35:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.393 13:35:18 -- accel/accel.sh@41 -- # jq -r . 00:06:16.393 [2024-04-18 13:35:18.972145] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:16.393 [2024-04-18 13:35:18.972219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031701 ] 00:06:16.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.393 [2024-04-18 13:35:19.055915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.393 [2024-04-18 13:35:19.176208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val= 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val= 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val=0x1 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val= 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val= 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val=compare 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val= 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- 
accel/accel.sh@20 -- # val=software 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val=32 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val=32 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val=1 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val=Yes 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val= 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:16.650 13:35:19 -- accel/accel.sh@20 -- # val= 00:06:16.650 13:35:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # IFS=: 00:06:16.650 13:35:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.024 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.024 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.024 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.024 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.024 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.024 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.024 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.024 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.024 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.024 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.024 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.024 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.024 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.024 13:35:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.024 13:35:20 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:18.024 13:35:20 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:18.024 00:06:18.024 real 0m1.516s 00:06:18.024 user 0m1.354s 00:06:18.024 sys 0m0.164s 00:06:18.024 13:35:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.024 13:35:20 -- common/autotest_common.sh@10 -- # set +x 00:06:18.024 ************************************ 00:06:18.024 END TEST accel_compare 00:06:18.024 ************************************ 00:06:18.024 13:35:20 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:18.024 13:35:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:18.024 13:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.024 13:35:20 -- common/autotest_common.sh@10 -- # set +x 00:06:18.024 ************************************ 00:06:18.024 START TEST accel_xor 00:06:18.024 ************************************ 00:06:18.024 13:35:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:18.024 13:35:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.025 13:35:20 -- accel/accel.sh@17 -- # local accel_module 00:06:18.025 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.025 13:35:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:18.025 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.025 13:35:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:18.025 13:35:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.025 13:35:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.025 13:35:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.025 13:35:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.025 13:35:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.025 13:35:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.025 13:35:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.025 13:35:20 -- accel/accel.sh@41 -- # jq -r . 00:06:18.025 [2024-04-18 13:35:20.615342] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:18.025 [2024-04-18 13:35:20.615406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031934 ] 00:06:18.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.025 [2024-04-18 13:35:20.694911] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.025 [2024-04-18 13:35:20.817271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val=0x1 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val=xor 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val=2 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val=software 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val=32 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val=32 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- 
accel/accel.sh@20 -- # val=1 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val=Yes 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:18.283 13:35:20 -- accel/accel.sh@20 -- # val= 00:06:18.283 13:35:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # IFS=: 00:06:18.283 13:35:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.682 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.682 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.682 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.682 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.682 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.682 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.682 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.682 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.682 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.682 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.682 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.682 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.682 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.682 13:35:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.682 13:35:22 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:19.682 13:35:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.682 00:06:19.682 real 0m1.499s 00:06:19.682 user 0m1.339s 00:06:19.683 sys 0m0.160s 00:06:19.683 13:35:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.683 13:35:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.683 ************************************ 00:06:19.683 END TEST accel_xor 00:06:19.683 ************************************ 00:06:19.683 13:35:22 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:19.683 13:35:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:19.683 13:35:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.683 13:35:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.683 ************************************ 00:06:19.683 START TEST accel_xor 
00:06:19.683 ************************************ 00:06:19.683 13:35:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:19.683 13:35:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.683 13:35:22 -- accel/accel.sh@17 -- # local accel_module 00:06:19.683 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.683 13:35:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:19.683 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.683 13:35:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:19.683 13:35:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.683 13:35:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.683 13:35:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.683 13:35:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.683 13:35:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.683 13:35:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.683 13:35:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.683 13:35:22 -- accel/accel.sh@41 -- # jq -r . 00:06:19.683 [2024-04-18 13:35:22.261383] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:19.683 [2024-04-18 13:35:22.261446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032152 ] 00:06:19.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.683 [2024-04-18 13:35:22.340649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.683 [2024-04-18 13:35:22.463315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=0x1 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=xor 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=3 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val='4096 
bytes' 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=software 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=32 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=32 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=1 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val=Yes 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.940 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.940 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.940 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.941 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:19.941 13:35:22 -- accel/accel.sh@20 -- # val= 00:06:19.941 13:35:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.941 13:35:22 -- accel/accel.sh@19 -- # IFS=: 00:06:19.941 13:35:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.322 13:35:23 -- accel/accel.sh@20 -- # val= 00:06:21.322 13:35:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # IFS=: 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # read -r var val 00:06:21.322 13:35:23 -- accel/accel.sh@20 -- # val= 00:06:21.322 13:35:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # IFS=: 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # read -r var val 00:06:21.322 13:35:23 -- accel/accel.sh@20 -- # val= 00:06:21.322 13:35:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # IFS=: 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # read -r var val 00:06:21.322 13:35:23 -- accel/accel.sh@20 -- # val= 00:06:21.322 13:35:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # IFS=: 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # read -r var val 00:06:21.322 13:35:23 -- accel/accel.sh@20 -- # val= 00:06:21.322 13:35:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # IFS=: 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # read -r 
var val 00:06:21.322 13:35:23 -- accel/accel.sh@20 -- # val= 00:06:21.322 13:35:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # IFS=: 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # read -r var val 00:06:21.322 13:35:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.322 13:35:23 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:21.322 13:35:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.322 00:06:21.322 real 0m1.512s 00:06:21.322 user 0m1.354s 00:06:21.322 sys 0m0.160s 00:06:21.322 13:35:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.322 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.322 ************************************ 00:06:21.322 END TEST accel_xor 00:06:21.322 ************************************ 00:06:21.322 13:35:23 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:21.322 13:35:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:21.322 13:35:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.322 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.322 ************************************ 00:06:21.322 START TEST accel_dif_verify 00:06:21.322 ************************************ 00:06:21.322 13:35:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:21.322 13:35:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.322 13:35:23 -- accel/accel.sh@17 -- # local accel_module 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # IFS=: 00:06:21.322 13:35:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:21.322 13:35:23 -- accel/accel.sh@19 -- # read -r var val 00:06:21.322 13:35:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:21.322 13:35:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.322 13:35:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.322 13:35:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.322 13:35:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.322 13:35:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.322 13:35:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.322 13:35:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.322 13:35:23 -- accel/accel.sh@41 -- # jq -r . 00:06:21.322 [2024-04-18 13:35:23.928534] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:21.322 [2024-04-18 13:35:23.928610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032312 ] 00:06:21.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.322 [2024-04-18 13:35:24.014829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.580 [2024-04-18 13:35:24.137800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val= 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val= 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val=0x1 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val= 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val= 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val=dif_verify 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val= 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val=software 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r 
var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val=32 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val=32 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val=1 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val=No 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val= 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:21.580 13:35:24 -- accel/accel.sh@20 -- # val= 00:06:21.580 13:35:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # IFS=: 00:06:21.580 13:35:24 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:22.953 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:22.953 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:22.953 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:22.953 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:22.953 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:22.953 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.953 13:35:25 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:22.953 13:35:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.953 00:06:22.953 real 0m1.518s 00:06:22.953 user 0m1.354s 00:06:22.953 sys 0m0.167s 00:06:22.953 13:35:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.953 13:35:25 -- common/autotest_common.sh@10 -- # set +x 00:06:22.953 
************************************ 00:06:22.953 END TEST accel_dif_verify 00:06:22.953 ************************************ 00:06:22.953 13:35:25 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:22.953 13:35:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:22.953 13:35:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.953 13:35:25 -- common/autotest_common.sh@10 -- # set +x 00:06:22.953 ************************************ 00:06:22.953 START TEST accel_dif_generate 00:06:22.953 ************************************ 00:06:22.953 13:35:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:22.953 13:35:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.953 13:35:25 -- accel/accel.sh@17 -- # local accel_module 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:22.953 13:35:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:22.953 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:22.953 13:35:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:22.953 13:35:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.953 13:35:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.953 13:35:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.953 13:35:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.953 13:35:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.953 13:35:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.953 13:35:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.953 13:35:25 -- accel/accel.sh@41 -- # jq -r . 00:06:22.953 [2024-04-18 13:35:25.587725] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:22.953 [2024-04-18 13:35:25.587791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032599 ] 00:06:22.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.953 [2024-04-18 13:35:25.664646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.211 [2024-04-18 13:35:25.788818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val=0x1 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val=dif_generate 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val=software 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read 
-r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val=32 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val=32 00:06:23.211 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.211 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.211 13:35:25 -- accel/accel.sh@20 -- # val=1 00:06:23.212 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.212 13:35:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.212 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.212 13:35:25 -- accel/accel.sh@20 -- # val=No 00:06:23.212 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.212 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:23.212 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.212 13:35:25 -- accel/accel.sh@20 -- # val= 00:06:23.212 13:35:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.212 13:35:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.583 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.583 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.583 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.583 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.583 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.583 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.583 13:35:27 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:24.583 13:35:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.583 00:06:24.583 real 0m1.500s 00:06:24.583 user 0m1.347s 00:06:24.583 sys 0m0.157s 00:06:24.583 13:35:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.583 13:35:27 -- common/autotest_common.sh@10 -- # set +x 00:06:24.583 
************************************ 00:06:24.583 END TEST accel_dif_generate 00:06:24.583 ************************************ 00:06:24.583 13:35:27 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:24.583 13:35:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:24.583 13:35:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.583 13:35:27 -- common/autotest_common.sh@10 -- # set +x 00:06:24.583 ************************************ 00:06:24.583 START TEST accel_dif_generate_copy 00:06:24.583 ************************************ 00:06:24.583 13:35:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:24.583 13:35:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.583 13:35:27 -- accel/accel.sh@17 -- # local accel_module 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.583 13:35:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:24.583 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.583 13:35:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:24.583 13:35:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.583 13:35:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.583 13:35:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.583 13:35:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.583 13:35:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.583 13:35:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.583 13:35:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.583 13:35:27 -- accel/accel.sh@41 -- # jq -r . 00:06:24.583 [2024-04-18 13:35:27.259197] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:24.583 [2024-04-18 13:35:27.259260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032765 ] 00:06:24.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.583 [2024-04-18 13:35:27.349800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.841 [2024-04-18 13:35:27.474091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val=0x1 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val=software 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val=32 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val=32 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r 
var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val=1 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val=No 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:24.841 13:35:27 -- accel/accel.sh@20 -- # val= 00:06:24.841 13:35:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # IFS=: 00:06:24.841 13:35:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.211 13:35:28 -- accel/accel.sh@20 -- # val= 00:06:26.211 13:35:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # IFS=: 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # read -r var val 00:06:26.211 13:35:28 -- accel/accel.sh@20 -- # val= 00:06:26.211 13:35:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # IFS=: 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # read -r var val 00:06:26.211 13:35:28 -- accel/accel.sh@20 -- # val= 00:06:26.211 13:35:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # IFS=: 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # read -r var val 00:06:26.211 13:35:28 -- accel/accel.sh@20 -- # val= 00:06:26.211 13:35:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # IFS=: 00:06:26.211 13:35:28 -- accel/accel.sh@19 -- # read -r var val 00:06:26.212 13:35:28 -- accel/accel.sh@20 -- # val= 00:06:26.212 13:35:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.212 13:35:28 -- accel/accel.sh@19 -- # IFS=: 00:06:26.212 13:35:28 -- accel/accel.sh@19 -- # read -r var val 00:06:26.212 13:35:28 -- accel/accel.sh@20 -- # val= 00:06:26.212 13:35:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.212 13:35:28 -- accel/accel.sh@19 -- # IFS=: 00:06:26.212 13:35:28 -- accel/accel.sh@19 -- # read -r var val 00:06:26.212 13:35:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.212 13:35:28 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:26.212 13:35:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.212 00:06:26.212 real 0m1.519s 00:06:26.212 user 0m1.348s 00:06:26.212 sys 0m0.172s 00:06:26.212 13:35:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.212 13:35:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.212 ************************************ 00:06:26.212 END TEST accel_dif_generate_copy 00:06:26.212 ************************************ 00:06:26.212 13:35:28 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:26.212 13:35:28 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:26.212 13:35:28 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:26.212 13:35:28 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.212 13:35:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.212 ************************************ 00:06:26.212 START TEST accel_comp 00:06:26.212 ************************************ 00:06:26.212 13:35:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:26.212 13:35:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.212 13:35:28 -- accel/accel.sh@17 -- # local accel_module 00:06:26.212 13:35:28 -- accel/accel.sh@19 -- # IFS=: 00:06:26.212 13:35:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:26.212 13:35:28 -- accel/accel.sh@19 -- # read -r var val 00:06:26.212 13:35:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:26.212 13:35:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.212 13:35:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.212 13:35:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.212 13:35:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.212 13:35:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.212 13:35:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.212 13:35:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.212 13:35:28 -- accel/accel.sh@41 -- # jq -r . 00:06:26.212 [2024-04-18 13:35:28.914507] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:26.212 [2024-04-18 13:35:28.914571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033041 ] 00:06:26.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.212 [2024-04-18 13:35:28.992462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.469 [2024-04-18 13:35:29.116848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=0x1 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=compress 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=software 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=32 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=32 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=1 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.469 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.469 13:35:29 -- accel/accel.sh@20 -- # val=No 00:06:26.469 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.470 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 13:35:29 -- accel/accel.sh@20 -- # val= 00:06:26.470 13:35:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 13:35:29 -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 13:35:29 -- accel/accel.sh@19 -- # read -r var val 00:06:27.840 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:27.840 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:27.840 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:27.840 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # read -r var 
val 00:06:27.840 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:27.840 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:27.840 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:27.840 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:27.840 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:27.840 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:27.840 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:27.840 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:27.841 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.841 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:27.841 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:27.841 13:35:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.841 13:35:30 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:27.841 13:35:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.841 00:06:27.841 real 0m1.507s 00:06:27.841 user 0m1.350s 00:06:27.841 sys 0m0.159s 00:06:27.841 13:35:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.841 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.841 ************************************ 00:06:27.841 END TEST accel_comp 00:06:27.841 ************************************ 00:06:27.841 13:35:30 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.841 13:35:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:27.841 13:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.841 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.841 ************************************ 00:06:27.841 START TEST accel_decomp 00:06:27.841 ************************************ 00:06:27.841 13:35:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.841 13:35:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.841 13:35:30 -- accel/accel.sh@17 -- # local accel_module 00:06:27.841 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:27.841 13:35:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.841 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:27.841 13:35:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:27.841 13:35:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.841 13:35:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.841 13:35:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.841 13:35:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.841 13:35:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.841 13:35:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.841 13:35:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.841 13:35:30 -- accel/accel.sh@41 -- # jq -r . 00:06:27.841 [2024-04-18 13:35:30.554741] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:27.841 [2024-04-18 13:35:30.554809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033214 ] 00:06:27.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.841 [2024-04-18 13:35:30.635617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.098 [2024-04-18 13:35:30.757742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=0x1 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=decompress 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=software 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=32 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- 
accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=32 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=1 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.098 13:35:30 -- accel/accel.sh@20 -- # val=Yes 00:06:28.098 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.098 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.099 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.099 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.099 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.099 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:28.099 13:35:30 -- accel/accel.sh@20 -- # val= 00:06:28.099 13:35:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.099 13:35:30 -- accel/accel.sh@19 -- # IFS=: 00:06:28.099 13:35:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.468 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.468 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.468 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.468 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.468 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.469 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.469 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.469 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.469 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.469 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.469 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.469 13:35:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.469 13:35:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.469 13:35:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.469 00:06:29.469 real 0m1.517s 00:06:29.469 user 0m1.356s 00:06:29.469 sys 0m0.164s 00:06:29.469 13:35:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.469 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.469 ************************************ 00:06:29.469 END TEST accel_decomp 00:06:29.469 ************************************ 00:06:29.469 13:35:32 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.469 13:35:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:29.469 13:35:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.469 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.469 ************************************ 00:06:29.469 START TEST accel_decmop_full 00:06:29.469 ************************************ 00:06:29.469 13:35:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.469 13:35:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.469 13:35:32 -- accel/accel.sh@17 -- # local accel_module 00:06:29.469 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.469 13:35:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.469 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.469 13:35:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.469 13:35:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.469 13:35:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.469 13:35:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.469 13:35:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.469 13:35:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.469 13:35:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.469 13:35:32 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.469 13:35:32 -- accel/accel.sh@41 -- # jq -r . 00:06:29.469 [2024-04-18 13:35:32.195766] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:29.469 [2024-04-18 13:35:32.195831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033495 ] 00:06:29.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.726 [2024-04-18 13:35:32.278398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.726 [2024-04-18 13:35:32.401452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=0x1 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=decompress 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=software 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=32 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- 
accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=32 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=1 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val=Yes 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:29.726 13:35:32 -- accel/accel.sh@20 -- # val= 00:06:29.726 13:35:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # IFS=: 00:06:29.726 13:35:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 13:35:33 -- accel/accel.sh@20 -- # val= 00:06:31.097 13:35:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 13:35:33 -- accel/accel.sh@20 -- # val= 00:06:31.097 13:35:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 13:35:33 -- accel/accel.sh@20 -- # val= 00:06:31.097 13:35:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 13:35:33 -- accel/accel.sh@20 -- # val= 00:06:31.097 13:35:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 13:35:33 -- accel/accel.sh@20 -- # val= 00:06:31.097 13:35:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 13:35:33 -- accel/accel.sh@20 -- # val= 00:06:31.097 13:35:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # IFS=: 00:06:31.097 13:35:33 -- accel/accel.sh@19 -- # read -r var val 00:06:31.097 13:35:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.097 13:35:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.097 13:35:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.097 00:06:31.097 real 0m1.527s 00:06:31.097 user 0m1.359s 00:06:31.097 sys 0m0.170s 00:06:31.097 13:35:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.097 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:31.097 ************************************ 00:06:31.097 END TEST accel_decmop_full 00:06:31.097 ************************************ 00:06:31.097 13:35:33 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.097 13:35:33 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:31.097 13:35:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.097 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:31.097 ************************************ 00:06:31.098 START TEST accel_decomp_mcore 00:06:31.098 ************************************ 00:06:31.098 13:35:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.098 13:35:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.098 13:35:33 -- accel/accel.sh@17 -- # local accel_module 00:06:31.098 13:35:33 -- accel/accel.sh@19 -- # IFS=: 00:06:31.098 13:35:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.098 13:35:33 -- accel/accel.sh@19 -- # read -r var val 00:06:31.098 13:35:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.098 13:35:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.098 13:35:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.098 13:35:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.098 13:35:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.098 13:35:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.098 13:35:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.098 13:35:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.098 13:35:33 -- accel/accel.sh@41 -- # jq -r . 00:06:31.098 [2024-04-18 13:35:33.873210] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:31.098 [2024-04-18 13:35:33.873283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033657 ] 00:06:31.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.355 [2024-04-18 13:35:33.950478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.355 [2024-04-18 13:35:34.078626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.355 [2024-04-18 13:35:34.081960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.355 [2024-04-18 13:35:34.082007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.355 [2024-04-18 13:35:34.082012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.355 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.355 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.355 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.355 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.355 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.355 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.355 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.355 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.355 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.355 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.355 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=0xf 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=decompress 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=software 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=32 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=32 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=1 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val=Yes 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:31.356 13:35:34 -- accel/accel.sh@20 -- # val= 00:06:31.356 13:35:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # IFS=: 00:06:31.356 13:35:34 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 
-- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:32.727 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.727 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.727 13:35:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.727 13:35:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.727 13:35:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.727 00:06:32.727 real 0m1.530s 00:06:32.727 user 0m4.856s 00:06:32.727 sys 0m0.170s 00:06:32.727 13:35:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.727 13:35:35 -- common/autotest_common.sh@10 -- # set +x 00:06:32.727 ************************************ 00:06:32.727 END TEST accel_decomp_mcore 00:06:32.727 ************************************ 00:06:32.727 13:35:35 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.727 13:35:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:32.727 13:35:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.727 13:35:35 -- common/autotest_common.sh@10 -- # set +x 00:06:32.985 ************************************ 00:06:32.985 START TEST accel_decomp_full_mcore 00:06:32.985 ************************************ 00:06:32.985 13:35:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.985 13:35:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.985 13:35:35 -- accel/accel.sh@17 -- # local accel_module 00:06:32.985 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:32.985 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:32.985 13:35:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.985 13:35:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.985 13:35:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.985 13:35:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.985 13:35:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.985 13:35:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.985 13:35:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.985 13:35:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.985 13:35:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.985 13:35:35 -- accel/accel.sh@41 -- # jq -r . 00:06:32.985 [2024-04-18 13:35:35.553175] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:32.985 [2024-04-18 13:35:35.553247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033919 ] 00:06:32.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.985 [2024-04-18 13:35:35.643038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.985 [2024-04-18 13:35:35.769197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.985 [2024-04-18 13:35:35.769252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.985 [2024-04-18 13:35:35.769307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.985 [2024-04-18 13:35:35.769310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=0xf 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=decompress 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=software 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=32 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=32 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=1 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val=Yes 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.243 13:35:35 -- accel/accel.sh@20 -- # val= 00:06:33.243 13:35:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.243 13:35:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 
-- accel/accel.sh@19 -- # IFS=: 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.624 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.624 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.624 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.625 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.625 13:35:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.625 13:35:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.625 13:35:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.625 00:06:34.625 real 0m1.553s 00:06:34.625 user 0m4.922s 00:06:34.625 sys 0m0.176s 00:06:34.625 13:35:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.625 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:06:34.625 ************************************ 00:06:34.625 END TEST accel_decomp_full_mcore 00:06:34.625 ************************************ 00:06:34.625 13:35:37 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.625 13:35:37 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:34.625 13:35:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.625 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:06:34.625 ************************************ 00:06:34.625 START TEST accel_decomp_mthread 00:06:34.625 ************************************ 00:06:34.625 13:35:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.625 13:35:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.625 13:35:37 -- accel/accel.sh@17 -- # local accel_module 00:06:34.625 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.625 13:35:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.625 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.625 13:35:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.625 13:35:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.625 13:35:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.625 13:35:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.625 13:35:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.625 13:35:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.625 13:35:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.625 13:35:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.625 13:35:37 -- accel/accel.sh@41 -- # jq -r . 00:06:34.625 [2024-04-18 13:35:37.245907] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:34.625 [2024-04-18 13:35:37.246016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034113 ] 00:06:34.625 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.625 [2024-04-18 13:35:37.329200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.910 [2024-04-18 13:35:37.453768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.910 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.910 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.910 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=0x1 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=decompress 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=software 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=32 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- 
accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=32 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=2 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val=Yes 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:34.911 13:35:37 -- accel/accel.sh@20 -- # val= 00:06:34.911 13:35:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # IFS=: 00:06:34.911 13:35:37 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@20 -- # val= 00:06:36.284 13:35:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@20 -- # val= 00:06:36.284 13:35:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@20 -- # val= 00:06:36.284 13:35:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@20 -- # val= 00:06:36.284 13:35:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@20 -- # val= 00:06:36.284 13:35:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@20 -- # val= 00:06:36.284 13:35:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@20 -- # val= 00:06:36.284 13:35:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.284 13:35:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.284 13:35:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.284 00:06:36.284 real 0m1.529s 00:06:36.284 user 0m1.355s 00:06:36.284 sys 0m0.176s 00:06:36.284 13:35:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.284 13:35:38 -- common/autotest_common.sh@10 -- # set +x 
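Most of the repetitive trace above comes from a single shell idiom in accel.sh: the expected configuration is streamed as colon-separated key/value pairs and consumed by a read -r loop with a case dispatch (the IFS=: / read -r var val / case "$var" lines). A minimal self-contained sketch of that pattern, assumed shape rather than the exact SPDK code:

# Assumed shape of the replay loop visible in the trace; keys and values are illustrative.
printf 'opc:decompress\nmodule:software\n' |
while IFS=: read -r var val; do
    case "$var" in
        opc)    echo "operation under test: $val" ;;
        module) echo "module under test:    $val" ;;
    esac
done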
00:06:36.284 ************************************ 00:06:36.284 END TEST accel_decomp_mthread 00:06:36.284 ************************************ 00:06:36.284 13:35:38 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.284 13:35:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:36.284 13:35:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.284 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.284 ************************************ 00:06:36.284 START TEST accel_deomp_full_mthread 00:06:36.284 ************************************ 00:06:36.284 13:35:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.284 13:35:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.284 13:35:38 -- accel/accel.sh@17 -- # local accel_module 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # IFS=: 00:06:36.284 13:35:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.284 13:35:38 -- accel/accel.sh@19 -- # read -r var val 00:06:36.284 13:35:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.284 13:35:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.284 13:35:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.284 13:35:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.284 13:35:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.284 13:35:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.284 13:35:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.284 13:35:38 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.284 13:35:38 -- accel/accel.sh@41 -- # jq -r . 00:06:36.284 [2024-04-18 13:35:38.921292] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:06:36.284 [2024-04-18 13:35:38.921364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034314 ] 00:06:36.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.284 [2024-04-18 13:35:39.010303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.542 [2024-04-18 13:35:39.133882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=0x1 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=decompress 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=software 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=32 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- 
accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=32 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=2 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val=Yes 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.542 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:36.542 13:35:39 -- accel/accel.sh@20 -- # val= 00:06:36.542 13:35:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.543 13:35:39 -- accel/accel.sh@19 -- # IFS=: 00:06:36.543 13:35:39 -- accel/accel.sh@19 -- # read -r var val 00:06:37.915 13:35:40 -- accel/accel.sh@20 -- # val= 00:06:37.915 13:35:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.915 13:35:40 -- accel/accel.sh@19 -- # IFS=: 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # read -r var val 00:06:37.916 13:35:40 -- accel/accel.sh@20 -- # val= 00:06:37.916 13:35:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # IFS=: 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # read -r var val 00:06:37.916 13:35:40 -- accel/accel.sh@20 -- # val= 00:06:37.916 13:35:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # IFS=: 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # read -r var val 00:06:37.916 13:35:40 -- accel/accel.sh@20 -- # val= 00:06:37.916 13:35:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # IFS=: 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # read -r var val 00:06:37.916 13:35:40 -- accel/accel.sh@20 -- # val= 00:06:37.916 13:35:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # IFS=: 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # read -r var val 00:06:37.916 13:35:40 -- accel/accel.sh@20 -- # val= 00:06:37.916 13:35:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # IFS=: 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # read -r var val 00:06:37.916 13:35:40 -- accel/accel.sh@20 -- # val= 00:06:37.916 13:35:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # IFS=: 00:06:37.916 13:35:40 -- accel/accel.sh@19 -- # read -r var val 00:06:37.916 13:35:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.916 13:35:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.916 13:35:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.916 00:06:37.916 real 0m1.564s 00:06:37.916 user 0m1.391s 00:06:37.916 sys 0m0.174s 00:06:37.916 13:35:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.916 13:35:40 -- common/autotest_common.sh@10 -- # set +x 
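Comparing this run with the previous one, the only difference in the invocation is the extra -o 0, and the echoed buffer size changes from '4096 bytes' to '111250 bytes'; the trace therefore suggests (an inference, not documented here) that -o 0 makes accel_perf process the whole input in one buffer instead of 4096-byte chunks.

# The two invocations from the trace, side by side (workspace paths shortened):
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2        # accel_decomp_mthread
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2   # full_mthread variant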
00:06:37.916 ************************************ 00:06:37.916 END TEST accel_deomp_full_mthread 00:06:37.916 ************************************ 00:06:37.916 13:35:40 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:37.916 13:35:40 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.916 13:35:40 -- accel/accel.sh@137 -- # build_accel_config 00:06:37.916 13:35:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:37.916 13:35:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.916 13:35:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.916 13:35:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.916 13:35:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.916 13:35:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.916 13:35:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.916 13:35:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.916 13:35:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.916 13:35:40 -- accel/accel.sh@41 -- # jq -r . 00:06:37.916 ************************************ 00:06:37.916 START TEST accel_dif_functional_tests 00:06:37.916 ************************************ 00:06:37.916 13:35:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.916 [2024-04-18 13:35:40.664007] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:37.916 [2024-04-18 13:35:40.664103] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034565 ] 00:06:37.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.174 [2024-04-18 13:35:40.750830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.174 [2024-04-18 13:35:40.875159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.174 [2024-04-18 13:35:40.875229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.174 [2024-04-18 13:35:40.875233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.431 00:06:38.431 00:06:38.431 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.431 http://cunit.sourceforge.net/ 00:06:38.431 00:06:38.431 00:06:38.431 Suite: accel_dif 00:06:38.431 Test: verify: DIF generated, GUARD check ...passed 00:06:38.431 Test: verify: DIF generated, APPTAG check ...passed 00:06:38.431 Test: verify: DIF generated, REFTAG check ...passed 00:06:38.431 Test: verify: DIF not generated, GUARD check ...[2024-04-18 13:35:40.982643] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.431 [2024-04-18 13:35:40.982708] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.431 passed 00:06:38.431 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 13:35:40.982752] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.431 [2024-04-18 13:35:40.982783] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.431 passed 00:06:38.431 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 13:35:40.982818] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.431 [2024-04-18 13:35:40.982848] 
dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.431 passed 00:06:38.431 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:38.431 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 13:35:40.982919] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:38.431 passed 00:06:38.431 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:38.431 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:38.431 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:38.431 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 13:35:40.983087] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:38.431 passed 00:06:38.431 Test: generate copy: DIF generated, GUARD check ...passed 00:06:38.431 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:38.431 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:38.431 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:38.431 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:38.431 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:38.431 Test: generate copy: iovecs-len validate ...[2024-04-18 13:35:40.983348] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:38.431 passed 00:06:38.431 Test: generate copy: buffer alignment validate ...passed 00:06:38.431 00:06:38.431 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.431 suites 1 1 n/a 0 0 00:06:38.431 tests 20 20 20 0 0 00:06:38.431 asserts 204 204 204 0 n/a 00:06:38.431 00:06:38.431 Elapsed time = 0.003 seconds 00:06:38.688 00:06:38.688 real 0m0.642s 00:06:38.688 user 0m0.905s 00:06:38.688 sys 0m0.215s 00:06:38.688 13:35:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.688 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:38.688 ************************************ 00:06:38.688 END TEST accel_dif_functional_tests 00:06:38.688 ************************************ 00:06:38.688 00:06:38.688 real 0m37.068s 00:06:38.688 user 0m38.636s 00:06:38.688 sys 0m6.468s 00:06:38.688 13:35:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.688 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:38.688 ************************************ 00:06:38.688 END TEST accel 00:06:38.688 ************************************ 00:06:38.688 13:35:41 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.688 13:35:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.688 13:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.688 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:38.688 ************************************ 00:06:38.688 START TEST accel_rpc 00:06:38.688 ************************************ 00:06:38.688 13:35:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.688 * Looking for test storage... 
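The accel_rpc test that follows starts spdk_tgt with --wait-for-rpc and drives the accel framework purely over JSON-RPC: it pins the copy opcode to a module, starts the framework, then reads the assignment back. A rough hand-run equivalent of that RPC sequence (relative paths assumed; the real script waits for the RPC socket before issuing commands):

# Sketch of the RPC sequence exercised by accel_rpc.sh below (assumed paths).
./build/bin/spdk_tgt --wait-for-rpc &
./scripts/rpc.py accel_assign_opc -o copy -m software      # pin the "copy" opcode to the software module
./scripts/rpc.py framework_start_init                      # accel framework initializes only now
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected output: software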
00:06:38.688 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:38.688 13:35:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.689 13:35:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1034761 00:06:38.689 13:35:41 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.689 13:35:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 1034761 00:06:38.689 13:35:41 -- common/autotest_common.sh@817 -- # '[' -z 1034761 ']' 00:06:38.689 13:35:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.689 13:35:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:38.689 13:35:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.689 13:35:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:38.689 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:38.946 [2024-04-18 13:35:41.531186] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:38.946 [2024-04-18 13:35:41.531279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034761 ] 00:06:38.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.946 [2024-04-18 13:35:41.610630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.946 [2024-04-18 13:35:41.731902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.203 13:35:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.203 13:35:41 -- common/autotest_common.sh@850 -- # return 0 00:06:39.203 13:35:41 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:39.203 13:35:41 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:39.203 13:35:41 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:39.203 13:35:41 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:39.203 13:35:41 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:39.203 13:35:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.203 13:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.203 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.203 ************************************ 00:06:39.203 START TEST accel_assign_opcode 00:06:39.203 ************************************ 00:06:39.203 13:35:41 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:39.203 13:35:41 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:39.204 13:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.204 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.204 [2024-04-18 13:35:41.868718] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:39.204 13:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.204 13:35:41 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:39.204 13:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.204 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.204 [2024-04-18 13:35:41.876718] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module software 00:06:39.204 13:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.204 13:35:41 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:39.204 13:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.204 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.461 13:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.461 13:35:42 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:39.461 13:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.461 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.461 13:35:42 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:39.461 13:35:42 -- accel/accel_rpc.sh@42 -- # grep software 00:06:39.461 13:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.461 software 00:06:39.461 00:06:39.461 real 0m0.330s 00:06:39.461 user 0m0.046s 00:06:39.461 sys 0m0.008s 00:06:39.461 13:35:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.461 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.461 ************************************ 00:06:39.461 END TEST accel_assign_opcode 00:06:39.461 ************************************ 00:06:39.461 13:35:42 -- accel/accel_rpc.sh@55 -- # killprocess 1034761 00:06:39.461 13:35:42 -- common/autotest_common.sh@936 -- # '[' -z 1034761 ']' 00:06:39.461 13:35:42 -- common/autotest_common.sh@940 -- # kill -0 1034761 00:06:39.461 13:35:42 -- common/autotest_common.sh@941 -- # uname 00:06:39.461 13:35:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.461 13:35:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1034761 00:06:39.461 13:35:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.461 13:35:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.461 13:35:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1034761' 00:06:39.461 killing process with pid 1034761 00:06:39.461 13:35:42 -- common/autotest_common.sh@955 -- # kill 1034761 00:06:39.461 13:35:42 -- common/autotest_common.sh@960 -- # wait 1034761 00:06:40.068 00:06:40.068 real 0m1.329s 00:06:40.068 user 0m1.277s 00:06:40.068 sys 0m0.527s 00:06:40.068 13:35:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.068 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.068 ************************************ 00:06:40.068 END TEST accel_rpc 00:06:40.068 ************************************ 00:06:40.068 13:35:42 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.068 13:35:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.068 13:35:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.068 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.325 ************************************ 00:06:40.325 START TEST app_cmdline 00:06:40.325 ************************************ 00:06:40.325 13:35:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.325 * Looking for test storage... 
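The app_cmdline test that follows restricts the target to exactly two RPCs via --rpcs-allowed and then checks that anything outside that list is rejected with a JSON-RPC "Method not found" error. Reproduced by hand (relative paths assumed), the flow is roughly:

# Sketch of the cmdline.sh flow below (assumed paths).
./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version | jq -r .version      # e.g. "SPDK v24.05-pre git sha1 65b4e17c6"
./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # only the two allowed methods are listed
./scripts/rpc.py env_dpdk_get_mem_stats                 # rejected: error code -32601, "Method not found"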
00:06:40.325 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:40.325 13:35:42 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.325 13:35:42 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1034982 00:06:40.325 13:35:42 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.325 13:35:42 -- app/cmdline.sh@18 -- # waitforlisten 1034982 00:06:40.325 13:35:42 -- common/autotest_common.sh@817 -- # '[' -z 1034982 ']' 00:06:40.325 13:35:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.325 13:35:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.325 13:35:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.325 13:35:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.325 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.325 [2024-04-18 13:35:43.006066] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:40.325 [2024-04-18 13:35:43.006163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034982 ] 00:06:40.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.325 [2024-04-18 13:35:43.090523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.581 [2024-04-18 13:35:43.214323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.839 13:35:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:40.839 13:35:43 -- common/autotest_common.sh@850 -- # return 0 00:06:40.839 13:35:43 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:41.096 { 00:06:41.096 "version": "SPDK v24.05-pre git sha1 65b4e17c6", 00:06:41.096 "fields": { 00:06:41.096 "major": 24, 00:06:41.096 "minor": 5, 00:06:41.096 "patch": 0, 00:06:41.096 "suffix": "-pre", 00:06:41.096 "commit": "65b4e17c6" 00:06:41.096 } 00:06:41.096 } 00:06:41.096 13:35:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.096 13:35:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.096 13:35:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.096 13:35:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.096 13:35:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.096 13:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.096 13:35:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.096 13:35:43 -- common/autotest_common.sh@10 -- # set +x 00:06:41.096 13:35:43 -- app/cmdline.sh@26 -- # sort 00:06:41.096 13:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.096 13:35:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.096 13:35:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.096 13:35:43 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.096 13:35:43 -- common/autotest_common.sh@638 -- # local es=0 00:06:41.096 13:35:43 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.096 13:35:43 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:41.096 13:35:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:41.096 13:35:43 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:41.096 13:35:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:41.096 13:35:43 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:41.096 13:35:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:41.096 13:35:43 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:41.096 13:35:43 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:41.096 13:35:43 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.353 request: 00:06:41.353 { 00:06:41.353 "method": "env_dpdk_get_mem_stats", 00:06:41.353 "req_id": 1 00:06:41.353 } 00:06:41.353 Got JSON-RPC error response 00:06:41.353 response: 00:06:41.353 { 00:06:41.353 "code": -32601, 00:06:41.353 "message": "Method not found" 00:06:41.353 } 00:06:41.353 13:35:44 -- common/autotest_common.sh@641 -- # es=1 00:06:41.353 13:35:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:41.353 13:35:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:41.353 13:35:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:41.353 13:35:44 -- app/cmdline.sh@1 -- # killprocess 1034982 00:06:41.353 13:35:44 -- common/autotest_common.sh@936 -- # '[' -z 1034982 ']' 00:06:41.353 13:35:44 -- common/autotest_common.sh@940 -- # kill -0 1034982 00:06:41.353 13:35:44 -- common/autotest_common.sh@941 -- # uname 00:06:41.353 13:35:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.353 13:35:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1034982 00:06:41.353 13:35:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.353 13:35:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.353 13:35:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1034982' 00:06:41.353 killing process with pid 1034982 00:06:41.353 13:35:44 -- common/autotest_common.sh@955 -- # kill 1034982 00:06:41.353 13:35:44 -- common/autotest_common.sh@960 -- # wait 1034982 00:06:41.917 00:06:41.917 real 0m1.755s 00:06:41.917 user 0m2.346s 00:06:41.917 sys 0m0.556s 00:06:41.917 13:35:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.917 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:06:41.917 ************************************ 00:06:41.917 END TEST app_cmdline 00:06:41.917 ************************************ 00:06:41.917 13:35:44 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:41.917 13:35:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.917 13:35:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.917 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.175 ************************************ 00:06:42.175 START TEST version 00:06:42.175 ************************************ 00:06:42.175 
13:35:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:42.175 * Looking for test storage... 00:06:42.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:42.175 13:35:44 -- app/version.sh@17 -- # get_header_version major 00:06:42.175 13:35:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:42.175 13:35:44 -- app/version.sh@14 -- # cut -f2 00:06:42.175 13:35:44 -- app/version.sh@14 -- # tr -d '"' 00:06:42.175 13:35:44 -- app/version.sh@17 -- # major=24 00:06:42.175 13:35:44 -- app/version.sh@18 -- # get_header_version minor 00:06:42.175 13:35:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:42.175 13:35:44 -- app/version.sh@14 -- # cut -f2 00:06:42.175 13:35:44 -- app/version.sh@14 -- # tr -d '"' 00:06:42.175 13:35:44 -- app/version.sh@18 -- # minor=5 00:06:42.175 13:35:44 -- app/version.sh@19 -- # get_header_version patch 00:06:42.175 13:35:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:42.175 13:35:44 -- app/version.sh@14 -- # cut -f2 00:06:42.175 13:35:44 -- app/version.sh@14 -- # tr -d '"' 00:06:42.175 13:35:44 -- app/version.sh@19 -- # patch=0 00:06:42.175 13:35:44 -- app/version.sh@20 -- # get_header_version suffix 00:06:42.175 13:35:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:42.175 13:35:44 -- app/version.sh@14 -- # cut -f2 00:06:42.175 13:35:44 -- app/version.sh@14 -- # tr -d '"' 00:06:42.175 13:35:44 -- app/version.sh@20 -- # suffix=-pre 00:06:42.175 13:35:44 -- app/version.sh@22 -- # version=24.5 00:06:42.175 13:35:44 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:42.175 13:35:44 -- app/version.sh@28 -- # version=24.5rc0 00:06:42.175 13:35:44 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:42.175 13:35:44 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:42.175 13:35:44 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:42.175 13:35:44 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:42.175 00:06:42.175 real 0m0.119s 00:06:42.175 user 0m0.063s 00:06:42.175 sys 0m0.081s 00:06:42.175 13:35:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.175 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.175 ************************************ 00:06:42.175 END TEST version 00:06:42.175 ************************************ 00:06:42.175 13:35:44 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:42.175 13:35:44 -- spdk/autotest.sh@194 -- # uname -s 00:06:42.175 13:35:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:42.175 13:35:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:42.175 13:35:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:42.176 13:35:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:42.176 13:35:44 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:42.176 13:35:44 -- spdk/autotest.sh@258 -- # 
timing_exit lib 00:06:42.176 13:35:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:42.176 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.176 13:35:44 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:42.176 13:35:44 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:42.176 13:35:44 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:42.176 13:35:44 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:42.176 13:35:44 -- spdk/autotest.sh@281 -- # '[' rdma = rdma ']' 00:06:42.176 13:35:44 -- spdk/autotest.sh@282 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:42.176 13:35:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:42.176 13:35:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.176 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.434 ************************************ 00:06:42.434 START TEST nvmf_rdma 00:06:42.434 ************************************ 00:06:42.434 13:35:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:42.434 * Looking for test storage... 00:06:42.434 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.434 13:35:45 -- nvmf/common.sh@7 -- # uname -s 00:06:42.434 13:35:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.434 13:35:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.434 13:35:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.434 13:35:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.434 13:35:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.434 13:35:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.434 13:35:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.434 13:35:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.434 13:35:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.434 13:35:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.434 13:35:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:06:42.434 13:35:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:06:42.434 13:35:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.434 13:35:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.434 13:35:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.434 13:35:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.434 13:35:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:42.434 13:35:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.434 13:35:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.434 13:35:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.434 13:35:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.434 13:35:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.434 13:35:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.434 13:35:45 -- paths/export.sh@5 -- # export PATH 00:06:42.434 13:35:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.434 13:35:45 -- nvmf/common.sh@47 -- # : 0 00:06:42.434 13:35:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.434 13:35:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.434 13:35:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.434 13:35:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.434 13:35:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.434 13:35:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.434 13:35:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.434 13:35:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:42.434 13:35:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:42.434 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:42.434 13:35:45 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:42.434 13:35:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:42.434 13:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.434 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.692 ************************************ 00:06:42.692 START TEST nvmf_example 00:06:42.692 ************************************ 00:06:42.692 13:35:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:42.692 * Looking for test storage... 
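Before the nvmf example can listen on an RDMA transport, nvmftestinit (further down in this trace) loads the InfiniBand/RDMA kernel modules and reads the IPv4 address configured on each Mellanox netdev; the resulting 192.168.100.8/9 addresses become the test's listener IPs. The shell idiom it uses is roughly the following (module list and ip/awk/cut pipeline copied from the later nvmf/common.sh trace lines):

# Pattern used by nvmf/common.sh later in this log (illustrative).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
# First IPv4 address on an RDMA-capable netdev, e.g. mlx_0_0 -> 192.168.100.8
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1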
00:06:42.692 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:42.692 13:35:45 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.692 13:35:45 -- nvmf/common.sh@7 -- # uname -s 00:06:42.692 13:35:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.692 13:35:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.692 13:35:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.693 13:35:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.693 13:35:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.693 13:35:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.693 13:35:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.693 13:35:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.693 13:35:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.693 13:35:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.693 13:35:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:06:42.693 13:35:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:06:42.693 13:35:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.693 13:35:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.693 13:35:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.693 13:35:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.693 13:35:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:42.693 13:35:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.693 13:35:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.693 13:35:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.693 13:35:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.693 13:35:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.693 13:35:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.693 13:35:45 -- paths/export.sh@5 -- # export PATH 00:06:42.693 13:35:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.693 13:35:45 -- nvmf/common.sh@47 -- # : 0 00:06:42.693 13:35:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.693 13:35:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.693 13:35:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.693 13:35:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.693 13:35:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.693 13:35:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.693 13:35:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.693 13:35:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.693 13:35:45 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:42.693 13:35:45 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:42.693 13:35:45 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:42.693 13:35:45 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:42.693 13:35:45 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:42.693 13:35:45 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:42.693 13:35:45 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:42.693 13:35:45 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:42.693 13:35:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:42.693 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.693 13:35:45 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:42.693 13:35:45 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:06:42.693 13:35:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.693 13:35:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:42.693 13:35:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:42.693 13:35:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:42.693 13:35:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.693 13:35:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.693 13:35:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.693 13:35:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:42.693 13:35:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:42.693 13:35:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.693 13:35:45 -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.973 13:35:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:45.973 13:35:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.973 13:35:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.973 13:35:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.973 13:35:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.973 13:35:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.973 13:35:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.973 13:35:48 -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.973 13:35:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.973 13:35:48 -- nvmf/common.sh@296 -- # e810=() 00:06:45.973 13:35:48 -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.973 13:35:48 -- nvmf/common.sh@297 -- # x722=() 00:06:45.973 13:35:48 -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.973 13:35:48 -- nvmf/common.sh@298 -- # mlx=() 00:06:45.973 13:35:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.973 13:35:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.973 13:35:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.973 13:35:48 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:45.973 13:35:48 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:45.973 13:35:48 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:45.974 13:35:48 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:45.974 13:35:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.974 13:35:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:06:45.974 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:06:45.974 13:35:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:45.974 13:35:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:06:45.974 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:06:45.974 13:35:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:45.974 13:35:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.974 13:35:48 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.974 13:35:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.974 13:35:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.974 13:35:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:06:45.974 Found net devices under 0000:81:00.0: mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.974 13:35:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.974 13:35:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.974 13:35:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.974 13:35:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:06:45.974 Found net devices under 0000:81:00.1: mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.974 13:35:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:45.974 13:35:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:45.974 13:35:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@409 -- # rdma_device_init 00:06:45.974 13:35:48 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:06:45.974 13:35:48 -- nvmf/common.sh@58 -- # uname 00:06:45.974 13:35:48 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:45.974 13:35:48 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:45.974 13:35:48 -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:45.974 13:35:48 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:45.974 13:35:48 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:45.974 13:35:48 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:45.974 13:35:48 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:45.974 13:35:48 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:45.974 13:35:48 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:06:45.974 13:35:48 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:45.974 13:35:48 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:45.974 13:35:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:45.974 13:35:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:45.974 13:35:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:45.974 13:35:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:45.974 13:35:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:45.974 13:35:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@104 
-- # echo mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@105 -- # continue 2 00:06:45.974 13:35:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@105 -- # continue 2 00:06:45.974 13:35:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:45.974 13:35:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.974 13:35:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:45.974 13:35:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:45.974 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:45.974 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:06:45.974 altname enp129s0f0np0 00:06:45.974 inet 192.168.100.8/24 scope global mlx_0_0 00:06:45.974 valid_lft forever preferred_lft forever 00:06:45.974 13:35:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:45.974 13:35:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.974 13:35:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:45.974 13:35:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:45.974 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:45.974 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:06:45.974 altname enp129s0f1np1 00:06:45.974 inet 192.168.100.9/24 scope global mlx_0_1 00:06:45.974 valid_lft forever preferred_lft forever 00:06:45.974 13:35:48 -- nvmf/common.sh@411 -- # return 0 00:06:45.974 13:35:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:45.974 13:35:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:45.974 13:35:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:06:45.974 13:35:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:45.974 13:35:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:45.974 13:35:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:45.974 13:35:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:45.974 13:35:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:45.974 13:35:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:45.974 13:35:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:45.974 13:35:48 
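
The get_ip_address calls traced here read each RDMA interface's first IPv4 address by piping `ip -o -4 addr show` through awk and cut; on this rig they resolve to 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. A self-contained sketch of that helper, assuming the interface name is passed as the first argument:

    # Print the first IPv4 address of an interface, stripped of its /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Example matching the trace: get_ip_address mlx_0_0  ->  192.168.100.8
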
-- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@105 -- # continue 2 00:06:45.974 13:35:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.974 13:35:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:45.974 13:35:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@105 -- # continue 2 00:06:45.974 13:35:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:45.974 13:35:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.974 13:35:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:45.974 13:35:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.974 13:35:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.974 13:35:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:06:45.974 192.168.100.9' 00:06:45.974 13:35:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:45.974 192.168.100.9' 00:06:45.974 13:35:48 -- nvmf/common.sh@446 -- # head -n 1 00:06:45.974 13:35:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:45.974 13:35:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:06:45.974 192.168.100.9' 00:06:45.974 13:35:48 -- nvmf/common.sh@447 -- # tail -n +2 00:06:45.974 13:35:48 -- nvmf/common.sh@447 -- # head -n 1 00:06:45.974 13:35:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:45.974 13:35:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:06:45.974 13:35:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:45.974 13:35:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:06:45.974 13:35:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:06:45.974 13:35:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:06:45.974 13:35:48 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:45.974 13:35:48 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:45.974 13:35:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.974 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.974 13:35:48 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:06:45.975 13:35:48 -- target/nvmf_example.sh@34 -- # nvmfpid=1037458 00:06:45.975 13:35:48 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:45.975 13:35:48 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:45.975 13:35:48 -- target/nvmf_example.sh@36 -- # waitforlisten 1037458 00:06:45.975 13:35:48 -- common/autotest_common.sh@817 -- # '[' -z 1037458 ']' 00:06:45.975 13:35:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.975 13:35:48 -- 
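
Once both interface addresses are collected, the script joins them into a newline-separated RDMA_IP_LIST, peels off the first and second entries with head/tail, and loads the initiator-side nvme-rdma module. A condensed sketch of that selection, using the addresses shown in the trace:

    # First list entry becomes the primary target address, the next the secondary.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    # Kernel initiator support for NVMe over RDMA:
    modprobe nvme-rdma
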
common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.975 13:35:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.975 13:35:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.975 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.975 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.975 13:35:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:45.975 13:35:48 -- common/autotest_common.sh@850 -- # return 0 00:06:45.975 13:35:48 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:45.975 13:35:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.975 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.975 13:35:48 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:45.975 13:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.975 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.232 13:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.232 13:35:48 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:46.232 13:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.232 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.232 13:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.232 13:35:48 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:46.232 13:35:48 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:46.232 13:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.232 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.232 13:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.232 13:35:48 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:46.232 13:35:48 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:46.232 13:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.232 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.232 13:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.232 13:35:48 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:46.232 13:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.232 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.232 13:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.232 13:35:48 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:46.232 13:35:48 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:46.232 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.472 Initializing NVMe Controllers 00:06:58.472 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:58.472 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:58.472 Initialization complete. Launching workers. 
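
The block above is the core of the example test: an RDMA transport is created over the SPDK RPC socket, a 64 MiB malloc bdev is attached as a namespace of nqn.2016-06.io.spdk:cnode1, a listener is opened on 192.168.100.8:4420, and spdk_nvme_perf drives 10 seconds of 4 KiB random mixed I/O (30% reads) at queue depth 64 against it. The same sequence, issued with the stock rpc.py client rather than the suite's rpc_cmd wrapper (rpc.py usage and relative paths are assumptions; the RPC names and arguments are taken from the trace):

    # Create the RDMA transport inside the running nvmf example app.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Back the subsystem with a 64 MiB RAM disk using 512-byte blocks (becomes Malloc0).
    ./scripts/rpc.py bdev_malloc_create 64 512
    # Create the subsystem, attach the bdev as a namespace, and listen on the RDMA address.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Exercise the listener: 4 KiB random I/O, 30% reads, queue depth 64, 10 seconds.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
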
00:06:58.472 ======================================================== 00:06:58.472 Latency(us) 00:06:58.472 Device Information : IOPS MiB/s Average min max 00:06:58.472 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19371.90 75.67 3303.31 947.88 12165.92 00:06:58.472 ======================================================== 00:06:58.472 Total : 19371.90 75.67 3303.31 947.88 12165.92 00:06:58.472 00:06:58.472 13:36:00 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:58.472 13:36:00 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:58.472 13:36:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:58.472 13:36:00 -- nvmf/common.sh@117 -- # sync 00:06:58.472 13:36:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:58.472 13:36:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:58.472 13:36:00 -- nvmf/common.sh@120 -- # set +e 00:06:58.472 13:36:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.472 13:36:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:58.472 rmmod nvme_rdma 00:06:58.472 rmmod nvme_fabrics 00:06:58.472 13:36:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.472 13:36:00 -- nvmf/common.sh@124 -- # set -e 00:06:58.472 13:36:00 -- nvmf/common.sh@125 -- # return 0 00:06:58.472 13:36:00 -- nvmf/common.sh@478 -- # '[' -n 1037458 ']' 00:06:58.472 13:36:00 -- nvmf/common.sh@479 -- # killprocess 1037458 00:06:58.472 13:36:00 -- common/autotest_common.sh@936 -- # '[' -z 1037458 ']' 00:06:58.472 13:36:00 -- common/autotest_common.sh@940 -- # kill -0 1037458 00:06:58.472 13:36:00 -- common/autotest_common.sh@941 -- # uname 00:06:58.472 13:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.472 13:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1037458 00:06:58.472 13:36:00 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:58.472 13:36:00 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:58.472 13:36:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1037458' 00:06:58.472 killing process with pid 1037458 00:06:58.472 13:36:00 -- common/autotest_common.sh@955 -- # kill 1037458 00:06:58.472 13:36:00 -- common/autotest_common.sh@960 -- # wait 1037458 00:06:58.472 nvmf threads initialize successfully 00:06:58.472 bdev subsystem init successfully 00:06:58.472 created a nvmf target service 00:06:58.472 create targets's poll groups done 00:06:58.472 all subsystems of target started 00:06:58.472 nvmf target is running 00:06:58.472 all subsystems of target stopped 00:06:58.472 destroy targets's poll groups done 00:06:58.472 destroyed the nvmf target service 00:06:58.472 bdev subsystem finish successfully 00:06:58.472 nvmf threads destroy successfully 00:06:58.472 13:36:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:58.472 13:36:00 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:06:58.472 13:36:00 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:58.472 13:36:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:58.472 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.472 00:06:58.472 real 0m15.411s 00:06:58.472 user 0m49.363s 00:06:58.472 sys 0m2.619s 00:06:58.472 13:36:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.473 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.473 ************************************ 00:06:58.473 END TEST nvmf_example 00:06:58.473 ************************************ 00:06:58.473 13:36:00 -- nvmf/nvmf.sh@24 -- # 
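
Before the next test begins, nvmftestfini above unwinds what the example set up: the host-side NVMe fabrics modules are removed and the target process is stopped by PID. A condensed sketch of that teardown (the retry loop and shared-memory bookkeeping of the real helpers are omitted):

    # Flush outstanding I/O, unload the initiator modules, then stop the target.
    sync
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                       # nvmfpid was recorded when the app started
    wait "$nvmfpid" 2>/dev/null || true   # only meaningful if it is a child of this shell
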
run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:06:58.473 13:36:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:58.473 13:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.473 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.473 ************************************ 00:06:58.473 START TEST nvmf_filesystem 00:06:58.473 ************************************ 00:06:58.473 13:36:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:06:58.473 * Looking for test storage... 00:06:58.473 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:58.473 13:36:00 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:06:58.473 13:36:00 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:58.473 13:36:00 -- common/autotest_common.sh@34 -- # set -e 00:06:58.473 13:36:00 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:58.473 13:36:00 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:58.473 13:36:00 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:06:58.473 13:36:00 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:58.473 13:36:00 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:06:58.473 13:36:00 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:58.473 13:36:00 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:58.473 13:36:00 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:58.473 13:36:00 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:58.473 13:36:00 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:58.473 13:36:00 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:58.473 13:36:00 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:58.473 13:36:00 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:58.473 13:36:00 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:58.473 13:36:00 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:58.473 13:36:00 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:58.473 13:36:00 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:58.473 13:36:00 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:58.473 13:36:00 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:58.473 13:36:00 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:58.473 13:36:00 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:58.473 13:36:00 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:06:58.473 13:36:00 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:58.473 13:36:00 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:58.473 13:36:00 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:58.473 13:36:00 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:58.473 13:36:00 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:58.473 13:36:00 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:58.473 
13:36:00 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:58.473 13:36:00 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:58.473 13:36:00 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:58.473 13:36:00 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:58.473 13:36:00 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:58.473 13:36:00 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:58.473 13:36:00 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:58.473 13:36:00 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:58.473 13:36:00 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:58.473 13:36:00 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:06:58.473 13:36:00 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:58.473 13:36:00 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:58.473 13:36:00 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:58.473 13:36:00 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:58.473 13:36:00 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:58.473 13:36:00 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:58.473 13:36:00 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:58.473 13:36:00 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:58.473 13:36:00 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:58.473 13:36:00 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:58.473 13:36:00 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:58.473 13:36:00 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:58.473 13:36:00 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:58.473 13:36:00 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:58.473 13:36:00 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:58.473 13:36:00 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:58.473 13:36:00 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:58.473 13:36:00 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:58.473 13:36:00 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:58.473 13:36:00 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:58.473 13:36:00 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:58.473 13:36:00 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:58.473 13:36:00 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:58.473 13:36:00 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:58.473 13:36:00 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:58.473 13:36:00 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:58.473 13:36:00 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:58.473 13:36:00 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:58.473 13:36:00 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:58.473 13:36:00 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:58.473 13:36:00 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:58.473 13:36:00 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 
00:06:58.473 13:36:00 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:58.473 13:36:00 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:58.473 13:36:00 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:58.473 13:36:00 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:58.473 13:36:00 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:58.473 13:36:00 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:58.473 13:36:00 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:58.474 13:36:00 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:58.474 13:36:00 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:58.474 13:36:00 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:58.474 13:36:00 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:06:58.474 13:36:00 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:06:58.474 13:36:00 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:06:58.474 13:36:00 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:06:58.474 13:36:00 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:58.474 13:36:00 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:06:58.474 13:36:00 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:58.474 13:36:00 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:06:58.474 13:36:00 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:58.474 13:36:00 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:58.474 13:36:00 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:58.474 13:36:00 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:58.474 13:36:00 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:58.474 13:36:00 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:58.474 13:36:00 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:06:58.474 13:36:00 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:58.474 #define SPDK_CONFIG_H 00:06:58.474 #define SPDK_CONFIG_APPS 1 00:06:58.474 #define SPDK_CONFIG_ARCH native 00:06:58.474 #undef SPDK_CONFIG_ASAN 00:06:58.474 #undef SPDK_CONFIG_AVAHI 00:06:58.474 #undef SPDK_CONFIG_CET 00:06:58.474 #define SPDK_CONFIG_COVERAGE 1 00:06:58.474 #define SPDK_CONFIG_CROSS_PREFIX 00:06:58.474 #undef SPDK_CONFIG_CRYPTO 00:06:58.474 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:58.474 #undef SPDK_CONFIG_CUSTOMOCF 00:06:58.474 #undef SPDK_CONFIG_DAOS 00:06:58.474 #define SPDK_CONFIG_DAOS_DIR 00:06:58.474 #define SPDK_CONFIG_DEBUG 1 00:06:58.474 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:58.474 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:06:58.474 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:58.474 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:58.474 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:58.474 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:06:58.474 #define SPDK_CONFIG_EXAMPLES 1 00:06:58.474 #undef SPDK_CONFIG_FC 
00:06:58.474 #define SPDK_CONFIG_FC_PATH 00:06:58.474 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:58.474 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:58.474 #undef SPDK_CONFIG_FUSE 00:06:58.474 #undef SPDK_CONFIG_FUZZER 00:06:58.474 #define SPDK_CONFIG_FUZZER_LIB 00:06:58.474 #undef SPDK_CONFIG_GOLANG 00:06:58.474 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:58.474 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:58.474 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:58.474 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:58.474 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:58.474 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:58.474 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:58.474 #define SPDK_CONFIG_IDXD 1 00:06:58.474 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:58.474 #undef SPDK_CONFIG_IPSEC_MB 00:06:58.474 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:58.474 #define SPDK_CONFIG_ISAL 1 00:06:58.474 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:58.474 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:58.474 #define SPDK_CONFIG_LIBDIR 00:06:58.474 #undef SPDK_CONFIG_LTO 00:06:58.474 #define SPDK_CONFIG_MAX_LCORES 00:06:58.474 #define SPDK_CONFIG_NVME_CUSE 1 00:06:58.474 #undef SPDK_CONFIG_OCF 00:06:58.474 #define SPDK_CONFIG_OCF_PATH 00:06:58.474 #define SPDK_CONFIG_OPENSSL_PATH 00:06:58.474 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:58.474 #define SPDK_CONFIG_PGO_DIR 00:06:58.474 #undef SPDK_CONFIG_PGO_USE 00:06:58.474 #define SPDK_CONFIG_PREFIX /usr/local 00:06:58.474 #undef SPDK_CONFIG_RAID5F 00:06:58.474 #undef SPDK_CONFIG_RBD 00:06:58.474 #define SPDK_CONFIG_RDMA 1 00:06:58.474 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:58.474 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:58.474 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:58.474 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:58.474 #define SPDK_CONFIG_SHARED 1 00:06:58.474 #undef SPDK_CONFIG_SMA 00:06:58.474 #define SPDK_CONFIG_TESTS 1 00:06:58.474 #undef SPDK_CONFIG_TSAN 00:06:58.474 #define SPDK_CONFIG_UBLK 1 00:06:58.474 #define SPDK_CONFIG_UBSAN 1 00:06:58.474 #undef SPDK_CONFIG_UNIT_TESTS 00:06:58.474 #undef SPDK_CONFIG_URING 00:06:58.474 #define SPDK_CONFIG_URING_PATH 00:06:58.474 #undef SPDK_CONFIG_URING_ZNS 00:06:58.474 #undef SPDK_CONFIG_USDT 00:06:58.474 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:58.474 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:58.474 #undef SPDK_CONFIG_VFIO_USER 00:06:58.474 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:58.474 #define SPDK_CONFIG_VHOST 1 00:06:58.474 #define SPDK_CONFIG_VIRTIO 1 00:06:58.474 #undef SPDK_CONFIG_VTUNE 00:06:58.474 #define SPDK_CONFIG_VTUNE_DIR 00:06:58.474 #define SPDK_CONFIG_WERROR 1 00:06:58.474 #define SPDK_CONFIG_WPDK_DIR 00:06:58.474 #undef SPDK_CONFIG_XNVME 00:06:58.474 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:58.474 13:36:00 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:58.474 13:36:00 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:58.474 13:36:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.474 13:36:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.474 13:36:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.474 13:36:00 -- paths/export.sh@2 -- # 
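
The header dump above ends with applications.sh pattern-matching the generated config.h; the escaped glob in the trace is simply the literal string '#define SPDK_CONFIG_DEBUG', and the result appears to feed, together with SPDK_AUTOTEST_DEBUG_APPS on the next traced line, into whether debug application binaries are used. Roughly, the check is equivalent to the following (path shown for this workspace):

    config_h=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h
    # A debug build is detected when config.h defines SPDK_CONFIG_DEBUG.
    if [[ -e "$config_h" && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug SPDK build"
    fi
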
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.474 13:36:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.474 13:36:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.474 13:36:00 -- paths/export.sh@5 -- # export PATH 00:06:58.474 13:36:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.475 13:36:00 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.475 13:36:00 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.475 13:36:00 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:06:58.475 13:36:00 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:06:58.475 13:36:00 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:58.475 13:36:00 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:58.475 13:36:00 -- pm/common@67 -- # TEST_TAG=N/A 00:06:58.475 13:36:00 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:06:58.475 13:36:00 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:06:58.475 13:36:00 -- pm/common@71 -- # uname -s 00:06:58.475 13:36:00 -- pm/common@71 -- # PM_OS=Linux 00:06:58.475 13:36:00 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:58.475 13:36:00 -- pm/common@74 -- # 
[[ Linux == FreeBSD ]] 00:06:58.475 13:36:00 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:58.475 13:36:00 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:58.475 13:36:00 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:58.475 13:36:00 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:58.475 13:36:00 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:58.475 13:36:00 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:58.475 13:36:00 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:58.475 13:36:00 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:06:58.475 13:36:00 -- common/autotest_common.sh@57 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:58.475 13:36:00 -- common/autotest_common.sh@61 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:58.475 13:36:00 -- common/autotest_common.sh@63 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:58.475 13:36:00 -- common/autotest_common.sh@65 -- # : 1 00:06:58.475 13:36:00 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:58.475 13:36:00 -- common/autotest_common.sh@67 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:58.475 13:36:00 -- common/autotest_common.sh@69 -- # : 00:06:58.475 13:36:00 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:58.475 13:36:00 -- common/autotest_common.sh@71 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:58.475 13:36:00 -- common/autotest_common.sh@73 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:58.475 13:36:00 -- common/autotest_common.sh@75 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:58.475 13:36:00 -- common/autotest_common.sh@77 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:58.475 13:36:00 -- common/autotest_common.sh@79 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:58.475 13:36:00 -- common/autotest_common.sh@81 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:58.475 13:36:00 -- common/autotest_common.sh@83 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:58.475 13:36:00 -- common/autotest_common.sh@85 -- # : 1 00:06:58.475 13:36:00 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:58.475 13:36:00 -- common/autotest_common.sh@87 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:58.475 13:36:00 -- common/autotest_common.sh@89 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:58.475 13:36:00 -- common/autotest_common.sh@91 -- # : 1 00:06:58.475 13:36:00 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:58.475 13:36:00 -- common/autotest_common.sh@93 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:58.475 13:36:00 -- common/autotest_common.sh@95 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:58.475 13:36:00 -- common/autotest_common.sh@97 -- # : 0 00:06:58.475 
13:36:00 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:58.475 13:36:00 -- common/autotest_common.sh@99 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:58.475 13:36:00 -- common/autotest_common.sh@101 -- # : rdma 00:06:58.475 13:36:00 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:58.475 13:36:00 -- common/autotest_common.sh@103 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:58.475 13:36:00 -- common/autotest_common.sh@105 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:58.475 13:36:00 -- common/autotest_common.sh@107 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:58.475 13:36:00 -- common/autotest_common.sh@109 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:58.475 13:36:00 -- common/autotest_common.sh@111 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:58.475 13:36:00 -- common/autotest_common.sh@113 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:58.475 13:36:00 -- common/autotest_common.sh@115 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:58.475 13:36:00 -- common/autotest_common.sh@117 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:58.475 13:36:00 -- common/autotest_common.sh@119 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:58.475 13:36:00 -- common/autotest_common.sh@121 -- # : 1 00:06:58.475 13:36:00 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:58.475 13:36:00 -- common/autotest_common.sh@123 -- # : 00:06:58.475 13:36:00 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:58.475 13:36:00 -- common/autotest_common.sh@125 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:58.475 13:36:00 -- common/autotest_common.sh@127 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:58.475 13:36:00 -- common/autotest_common.sh@129 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:58.475 13:36:00 -- common/autotest_common.sh@131 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:58.475 13:36:00 -- common/autotest_common.sh@133 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:58.475 13:36:00 -- common/autotest_common.sh@135 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:58.475 13:36:00 -- common/autotest_common.sh@137 -- # : 00:06:58.475 13:36:00 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:58.475 13:36:00 -- common/autotest_common.sh@139 -- # : true 00:06:58.475 13:36:00 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:58.475 13:36:00 -- common/autotest_common.sh@141 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:58.475 13:36:00 -- common/autotest_common.sh@143 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:58.475 13:36:00 -- common/autotest_common.sh@145 -- # : 0 
00:06:58.475 13:36:00 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:58.475 13:36:00 -- common/autotest_common.sh@147 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:58.475 13:36:00 -- common/autotest_common.sh@149 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:58.475 13:36:00 -- common/autotest_common.sh@151 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:58.475 13:36:00 -- common/autotest_common.sh@153 -- # : mlx5 00:06:58.475 13:36:00 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:58.475 13:36:00 -- common/autotest_common.sh@155 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:58.475 13:36:00 -- common/autotest_common.sh@157 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:58.475 13:36:00 -- common/autotest_common.sh@159 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:58.475 13:36:00 -- common/autotest_common.sh@161 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:58.475 13:36:00 -- common/autotest_common.sh@163 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:58.475 13:36:00 -- common/autotest_common.sh@166 -- # : 00:06:58.475 13:36:00 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:58.475 13:36:00 -- common/autotest_common.sh@168 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:58.475 13:36:00 -- common/autotest_common.sh@170 -- # : 0 00:06:58.475 13:36:00 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:58.475 13:36:00 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:06:58.475 13:36:00 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:06:58.476 13:36:00 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:06:58.476 13:36:00 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:06:58.476 13:36:00 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.476 13:36:00 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.476 13:36:00 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.476 13:36:00 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.476 13:36:00 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.476 13:36:00 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.476 13:36:00 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:58.476 13:36:00 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:58.476 13:36:00 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:58.476 13:36:00 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:58.476 13:36:00 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.476 13:36:00 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.476 13:36:00 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.476 13:36:00 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.476 13:36:00 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:58.476 13:36:00 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:58.476 13:36:00 -- common/autotest_common.sh@199 -- # cat 00:06:58.476 13:36:00 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:58.476 13:36:00 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.476 13:36:00 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.476 13:36:00 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.476 13:36:00 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.476 13:36:00 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:58.476 13:36:00 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:58.476 13:36:00 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:06:58.476 13:36:00 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:06:58.476 13:36:00 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:06:58.476 13:36:00 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:06:58.476 13:36:00 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.476 13:36:00 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.476 13:36:00 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.476 13:36:00 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.476 13:36:00 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.476 13:36:00 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.476 13:36:00 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.476 13:36:00 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.476 13:36:00 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:58.476 13:36:00 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:58.476 13:36:00 -- common/autotest_common.sh@252 -- # valgrind= 00:06:58.476 13:36:00 -- common/autotest_common.sh@258 -- # uname -s 00:06:58.476 13:36:00 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:58.476 13:36:00 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:58.476 13:36:00 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:58.476 13:36:00 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:58.476 13:36:00 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:58.476 13:36:00 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:58.476 13:36:00 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:58.476 13:36:00 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:06:58.476 13:36:00 -- 
common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:58.476 13:36:00 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:58.476 13:36:00 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:58.476 13:36:00 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:58.476 13:36:00 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:58.476 13:36:00 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:58.476 13:36:00 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:06:58.476 13:36:00 -- common/autotest_common.sh@307 -- # [[ -z 1039048 ]] 00:06:58.476 13:36:00 -- common/autotest_common.sh@307 -- # kill -0 1039048 00:06:58.476 13:36:00 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:58.476 13:36:00 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:58.476 13:36:00 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:58.477 13:36:00 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:58.477 13:36:00 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:58.477 13:36:00 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:58.477 13:36:00 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:58.477 13:36:00 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:58.477 13:36:00 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.bcOvtt 00:06:58.477 13:36:00 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:58.477 13:36:00 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:58.477 13:36:00 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:58.477 13:36:00 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bcOvtt/tests/target /tmp/spdk.bcOvtt 00:06:58.477 13:36:01 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@316 -- # df -T 00:06:58.477 13:36:01 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:58.477 13:36:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=995188736 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:58.477 13:36:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=4289241088 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # 
avails["$mount"]=51493273600 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994586112 00:06:58.477 13:36:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=10501312512 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=30943752192 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997291008 00:06:58.477 13:36:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=53538816 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=12389851136 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398919680 00:06:58.477 13:36:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=9068544 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996877312 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997295104 00:06:58.477 13:36:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=417792 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199451648 00:06:58.477 13:36:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199455744 00:06:58.477 13:36:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:58.477 13:36:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.477 13:36:01 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:58.477 * Looking for test storage... 
00:06:58.477 13:36:01 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:58.477 13:36:01 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:58.477 13:36:01 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:58.477 13:36:01 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:58.477 13:36:01 -- common/autotest_common.sh@361 -- # mount=/ 00:06:58.477 13:36:01 -- common/autotest_common.sh@363 -- # target_space=51493273600 00:06:58.477 13:36:01 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:58.477 13:36:01 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:58.477 13:36:01 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:58.477 13:36:01 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:58.477 13:36:01 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:58.477 13:36:01 -- common/autotest_common.sh@370 -- # new_size=12715905024 00:06:58.477 13:36:01 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:58.477 13:36:01 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:58.477 13:36:01 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:58.477 13:36:01 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:58.477 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:58.477 13:36:01 -- common/autotest_common.sh@378 -- # return 0 00:06:58.477 13:36:01 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:58.477 13:36:01 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:58.477 13:36:01 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:58.477 13:36:01 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:58.477 13:36:01 -- common/autotest_common.sh@1673 -- # true 00:06:58.477 13:36:01 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:58.477 13:36:01 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:58.477 13:36:01 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:58.477 13:36:01 -- common/autotest_common.sh@27 -- # exec 00:06:58.477 13:36:01 -- common/autotest_common.sh@29 -- # exec 00:06:58.477 13:36:01 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:58.477 13:36:01 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:58.477 13:36:01 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:58.477 13:36:01 -- common/autotest_common.sh@18 -- # set -x 00:06:58.477 13:36:01 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.477 13:36:01 -- nvmf/common.sh@7 -- # uname -s 00:06:58.477 13:36:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.477 13:36:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.477 13:36:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.477 13:36:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.477 13:36:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.477 13:36:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.477 13:36:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.477 13:36:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.477 13:36:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.477 13:36:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.477 13:36:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:06:58.477 13:36:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:06:58.477 13:36:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.477 13:36:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.477 13:36:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.477 13:36:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.477 13:36:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:58.477 13:36:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.477 13:36:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.477 13:36:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.477 13:36:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.477 13:36:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.477 13:36:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.477 13:36:01 -- paths/export.sh@5 -- # export PATH 00:06:58.477 13:36:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.477 13:36:01 -- nvmf/common.sh@47 -- # : 0 00:06:58.477 13:36:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.477 13:36:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.477 13:36:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.477 13:36:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.477 13:36:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.477 13:36:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.477 13:36:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.477 13:36:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.477 13:36:01 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:58.477 13:36:01 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:58.477 13:36:01 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:58.477 13:36:01 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:06:58.477 13:36:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.477 13:36:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:58.477 13:36:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:58.477 13:36:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:58.477 13:36:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.477 13:36:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.477 13:36:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.477 13:36:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:58.477 13:36:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:58.477 13:36:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.477 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:07:01.004 13:36:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:01.004 13:36:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.004 13:36:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.004 13:36:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.004 13:36:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.004 13:36:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.004 13:36:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.004 13:36:03 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:01.004 13:36:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.004 13:36:03 -- nvmf/common.sh@296 -- # e810=() 00:07:01.004 13:36:03 -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.004 13:36:03 -- nvmf/common.sh@297 -- # x722=() 00:07:01.004 13:36:03 -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.004 13:36:03 -- nvmf/common.sh@298 -- # mlx=() 00:07:01.004 13:36:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.004 13:36:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.004 13:36:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.004 13:36:03 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:01.004 13:36:03 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:01.004 13:36:03 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:01.004 13:36:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.004 13:36:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.004 13:36:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:01.004 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:01.004 13:36:03 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:01.004 13:36:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.004 13:36:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:01.004 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:01.004 13:36:03 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:01.004 13:36:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.004 13:36:03 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.004 
13:36:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.004 13:36:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:01.004 13:36:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.004 13:36:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:01.004 Found net devices under 0000:81:00.0: mlx_0_0 00:07:01.004 13:36:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.004 13:36:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.004 13:36:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.004 13:36:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:01.004 13:36:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.004 13:36:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:01.004 Found net devices under 0000:81:00.1: mlx_0_1 00:07:01.004 13:36:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.004 13:36:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:01.004 13:36:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:01.004 13:36:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:01.004 13:36:03 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:01.004 13:36:03 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:01.004 13:36:03 -- nvmf/common.sh@58 -- # uname 00:07:01.004 13:36:03 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:01.004 13:36:03 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:01.004 13:36:03 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:01.004 13:36:03 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:01.004 13:36:03 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:01.004 13:36:03 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:01.004 13:36:03 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:01.004 13:36:03 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:01.004 13:36:03 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:01.004 13:36:03 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:01.004 13:36:03 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:01.004 13:36:03 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:01.004 13:36:03 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:01.262 13:36:03 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:01.262 13:36:03 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:01.262 13:36:03 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:01.262 13:36:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:01.262 13:36:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:01.262 13:36:03 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:01.262 13:36:03 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:01.262 13:36:03 -- nvmf/common.sh@105 -- # continue 2 00:07:01.262 13:36:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:01.262 13:36:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:01.262 13:36:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:01.262 13:36:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:01.262 13:36:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:01.262 13:36:03 -- nvmf/common.sh@104 -- # 
echo mlx_0_1 00:07:01.262 13:36:03 -- nvmf/common.sh@105 -- # continue 2 00:07:01.262 13:36:03 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:01.262 13:36:03 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:01.262 13:36:03 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:01.262 13:36:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:01.262 13:36:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:01.262 13:36:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:01.262 13:36:03 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:01.263 13:36:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:01.263 13:36:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:01.263 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:01.263 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:01.263 altname enp129s0f0np0 00:07:01.263 inet 192.168.100.8/24 scope global mlx_0_0 00:07:01.263 valid_lft forever preferred_lft forever 00:07:01.263 13:36:03 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:01.263 13:36:03 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:01.263 13:36:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:01.263 13:36:03 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:01.263 13:36:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:01.263 13:36:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:01.263 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:01.263 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:01.263 altname enp129s0f1np1 00:07:01.263 inet 192.168.100.9/24 scope global mlx_0_1 00:07:01.263 valid_lft forever preferred_lft forever 00:07:01.263 13:36:03 -- nvmf/common.sh@411 -- # return 0 00:07:01.263 13:36:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:01.263 13:36:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:01.263 13:36:03 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:01.263 13:36:03 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:01.263 13:36:03 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:01.263 13:36:03 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:01.263 13:36:03 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:01.263 13:36:03 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:01.263 13:36:03 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:01.263 13:36:03 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:01.263 13:36:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:01.263 13:36:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:01.263 13:36:03 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:01.263 13:36:03 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:01.263 13:36:03 -- nvmf/common.sh@105 -- # continue 2 00:07:01.263 13:36:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:01.263 13:36:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:01.263 13:36:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:01.263 13:36:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:01.263 13:36:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:01.263 13:36:03 -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:07:01.263 13:36:03 -- nvmf/common.sh@105 -- # continue 2 00:07:01.263 13:36:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:01.263 13:36:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:01.263 13:36:03 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:01.263 13:36:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:01.263 13:36:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:01.263 13:36:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:01.263 13:36:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:01.263 13:36:03 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:01.263 192.168.100.9' 00:07:01.263 13:36:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:01.263 192.168.100.9' 00:07:01.263 13:36:03 -- nvmf/common.sh@446 -- # head -n 1 00:07:01.263 13:36:03 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:01.263 13:36:03 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:01.263 192.168.100.9' 00:07:01.263 13:36:03 -- nvmf/common.sh@447 -- # tail -n +2 00:07:01.263 13:36:03 -- nvmf/common.sh@447 -- # head -n 1 00:07:01.263 13:36:03 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:01.263 13:36:03 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:01.263 13:36:03 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:01.263 13:36:03 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:01.263 13:36:03 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:01.263 13:36:03 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:01.263 13:36:03 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:01.263 13:36:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:01.263 13:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.263 13:36:03 -- common/autotest_common.sh@10 -- # set +x 00:07:01.263 ************************************ 00:07:01.263 START TEST nvmf_filesystem_no_in_capsule 00:07:01.263 ************************************ 00:07:01.263 13:36:04 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:01.263 13:36:04 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:01.263 13:36:04 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:01.263 13:36:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:01.263 13:36:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:01.263 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:07:01.263 13:36:04 -- nvmf/common.sh@470 -- # nvmfpid=1041052 00:07:01.263 13:36:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.263 13:36:04 -- nvmf/common.sh@471 -- # waitforlisten 1041052 00:07:01.263 13:36:04 -- common/autotest_common.sh@817 -- # '[' -z 1041052 ']' 00:07:01.263 13:36:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.263 13:36:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:01.263 13:36:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.263 13:36:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:01.263 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:07:01.521 [2024-04-18 13:36:04.071956] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:01.521 [2024-04-18 13:36:04.072050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.521 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.521 [2024-04-18 13:36:04.147079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.521 [2024-04-18 13:36:04.273097] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.521 [2024-04-18 13:36:04.273173] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.521 [2024-04-18 13:36:04.273190] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.521 [2024-04-18 13:36:04.273204] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.521 [2024-04-18 13:36:04.273216] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.521 [2024-04-18 13:36:04.273307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.521 [2024-04-18 13:36:04.273360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.521 [2024-04-18 13:36:04.273387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.521 [2024-04-18 13:36:04.273391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.453 13:36:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:02.453 13:36:05 -- common/autotest_common.sh@850 -- # return 0 00:07:02.453 13:36:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:02.453 13:36:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:02.453 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.453 13:36:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.453 13:36:05 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:02.453 13:36:05 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:02.453 13:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.453 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.453 [2024-04-18 13:36:05.155533] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:02.453 [2024-04-18 13:36:05.181897] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc8090/0xdcc580) succeed. 00:07:02.453 [2024-04-18 13:36:05.194305] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdc9680/0xe0dc10) succeed. 
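At this point nvmf_tgt is up (pid 1041052) and the no-in-capsule pass configures the RDMA target through rpc_cmd; the individual calls follow in the log. Condensed into plain rpc.py invocations against the default /var/tmp/spdk.sock socket, the same sequence looks roughly like this (a sketch, not a verbatim extract from filesystem.sh):

    # Target-side setup for the no-in-capsule pass (-c 0):
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420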
00:07:02.712 13:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.712 13:36:05 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:02.712 13:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.712 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.712 Malloc1 00:07:02.712 13:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.712 13:36:05 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:02.712 13:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.712 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.712 13:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.712 13:36:05 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:02.712 13:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.712 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.712 13:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.712 13:36:05 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:02.712 13:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.712 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.712 [2024-04-18 13:36:05.507198] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:02.712 13:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.712 13:36:05 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:02.712 13:36:05 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:02.712 13:36:05 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:02.712 13:36:05 -- common/autotest_common.sh@1366 -- # local bs 00:07:02.712 13:36:05 -- common/autotest_common.sh@1367 -- # local nb 00:07:02.712 13:36:05 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:02.712 13:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.712 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.970 13:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.970 13:36:05 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:02.970 { 00:07:02.970 "name": "Malloc1", 00:07:02.970 "aliases": [ 00:07:02.970 "1b65fafa-fdf3-4042-bd78-190e6bca920f" 00:07:02.970 ], 00:07:02.970 "product_name": "Malloc disk", 00:07:02.970 "block_size": 512, 00:07:02.970 "num_blocks": 1048576, 00:07:02.970 "uuid": "1b65fafa-fdf3-4042-bd78-190e6bca920f", 00:07:02.970 "assigned_rate_limits": { 00:07:02.970 "rw_ios_per_sec": 0, 00:07:02.970 "rw_mbytes_per_sec": 0, 00:07:02.970 "r_mbytes_per_sec": 0, 00:07:02.970 "w_mbytes_per_sec": 0 00:07:02.970 }, 00:07:02.970 "claimed": true, 00:07:02.970 "claim_type": "exclusive_write", 00:07:02.970 "zoned": false, 00:07:02.970 "supported_io_types": { 00:07:02.970 "read": true, 00:07:02.970 "write": true, 00:07:02.970 "unmap": true, 00:07:02.970 "write_zeroes": true, 00:07:02.970 "flush": true, 00:07:02.970 "reset": true, 00:07:02.970 "compare": false, 00:07:02.970 "compare_and_write": false, 00:07:02.970 "abort": true, 00:07:02.970 "nvme_admin": false, 00:07:02.970 "nvme_io": false 00:07:02.970 }, 00:07:02.970 "memory_domains": [ 00:07:02.970 { 00:07:02.970 "dma_device_id": "system", 00:07:02.970 "dma_device_type": 1 00:07:02.970 }, 00:07:02.970 { 00:07:02.970 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:02.970 "dma_device_type": 2 00:07:02.970 } 00:07:02.970 ], 00:07:02.970 "driver_specific": {} 00:07:02.970 } 00:07:02.970 ]' 00:07:02.970 13:36:05 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:02.970 13:36:05 -- common/autotest_common.sh@1369 -- # bs=512 00:07:02.970 13:36:05 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:02.970 13:36:05 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:02.970 13:36:05 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:02.970 13:36:05 -- common/autotest_common.sh@1374 -- # echo 512 00:07:02.970 13:36:05 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:02.970 13:36:05 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:04.341 13:36:06 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:04.341 13:36:06 -- common/autotest_common.sh@1184 -- # local i=0 00:07:04.341 13:36:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:04.341 13:36:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:04.341 13:36:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:06.239 13:36:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:06.239 13:36:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:06.239 13:36:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:06.239 13:36:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:06.239 13:36:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:06.239 13:36:08 -- common/autotest_common.sh@1194 -- # return 0 00:07:06.239 13:36:08 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:06.239 13:36:08 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:06.239 13:36:08 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:06.239 13:36:08 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:06.239 13:36:08 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:06.239 13:36:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:06.239 13:36:08 -- setup/common.sh@80 -- # echo 536870912 00:07:06.239 13:36:08 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:06.239 13:36:08 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:06.239 13:36:08 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:06.239 13:36:08 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:06.239 13:36:08 -- target/filesystem.sh@69 -- # partprobe 00:07:06.497 13:36:09 -- target/filesystem.sh@70 -- # sleep 1 00:07:07.433 13:36:10 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:07.433 13:36:10 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:07.433 13:36:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:07.433 13:36:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.434 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 ************************************ 00:07:07.692 START TEST filesystem_ext4 00:07:07.692 ************************************ 00:07:07.692 13:36:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:07.692 13:36:10 -- target/filesystem.sh@18 -- 
# fstype=ext4 00:07:07.692 13:36:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.692 13:36:10 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:07.692 13:36:10 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:07.692 13:36:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:07.692 13:36:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:07.692 13:36:10 -- common/autotest_common.sh@915 -- # local force 00:07:07.692 13:36:10 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:07.692 13:36:10 -- common/autotest_common.sh@918 -- # force=-F 00:07:07.692 13:36:10 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:07.692 mke2fs 1.46.5 (30-Dec-2021) 00:07:07.692 Discarding device blocks: 0/522240 done 00:07:07.692 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:07.692 Filesystem UUID: f64f915c-346f-4f23-8552-8600a509aa81 00:07:07.692 Superblock backups stored on blocks: 00:07:07.692 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:07.692 00:07:07.692 Allocating group tables: 0/64 done 00:07:07.692 Writing inode tables: 0/64 done 00:07:07.692 Creating journal (8192 blocks): done 00:07:07.692 Writing superblocks and filesystem accounting information: 0/64 done 00:07:07.692 00:07:07.692 13:36:10 -- common/autotest_common.sh@931 -- # return 0 00:07:07.692 13:36:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.692 13:36:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.692 13:36:10 -- target/filesystem.sh@25 -- # sync 00:07:07.692 13:36:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.692 13:36:10 -- target/filesystem.sh@27 -- # sync 00:07:07.692 13:36:10 -- target/filesystem.sh@29 -- # i=0 00:07:07.692 13:36:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.692 13:36:10 -- target/filesystem.sh@37 -- # kill -0 1041052 00:07:07.692 13:36:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.692 13:36:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.692 13:36:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.692 13:36:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.692 00:07:07.692 real 0m0.167s 00:07:07.692 user 0m0.011s 00:07:07.692 sys 0m0.035s 00:07:07.692 13:36:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.692 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:07.692 ************************************ 00:07:07.692 END TEST filesystem_ext4 00:07:07.692 ************************************ 00:07:07.692 13:36:10 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:07.692 13:36:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:07.692 13:36:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.692 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:07.950 ************************************ 00:07:07.950 START TEST filesystem_btrfs 00:07:07.950 ************************************ 00:07:07.950 13:36:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:07.950 13:36:10 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:07.950 13:36:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.950 13:36:10 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:07.950 13:36:10 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:07.950 13:36:10 -- common/autotest_common.sh@913 
-- # local dev_name=/dev/nvme0n1p1 00:07:07.950 13:36:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:07.950 13:36:10 -- common/autotest_common.sh@915 -- # local force 00:07:07.950 13:36:10 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:07.950 13:36:10 -- common/autotest_common.sh@920 -- # force=-f 00:07:07.950 13:36:10 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:07.950 btrfs-progs v6.6.2 00:07:07.950 See https://btrfs.readthedocs.io for more information. 00:07:07.950 00:07:07.950 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:07.950 NOTE: several default settings have changed in version 5.15, please make sure 00:07:07.950 this does not affect your deployments: 00:07:07.950 - DUP for metadata (-m dup) 00:07:07.950 - enabled no-holes (-O no-holes) 00:07:07.950 - enabled free-space-tree (-R free-space-tree) 00:07:07.950 00:07:07.950 Label: (null) 00:07:07.950 UUID: ef512f01-ac2d-446d-a2c7-cdc8d0fc1398 00:07:07.950 Node size: 16384 00:07:07.950 Sector size: 4096 00:07:07.950 Filesystem size: 510.00MiB 00:07:07.950 Block group profiles: 00:07:07.950 Data: single 8.00MiB 00:07:07.950 Metadata: DUP 32.00MiB 00:07:07.950 System: DUP 8.00MiB 00:07:07.950 SSD detected: yes 00:07:07.950 Zoned device: no 00:07:07.950 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:07.950 Runtime features: free-space-tree 00:07:07.950 Checksum: crc32c 00:07:07.950 Number of devices: 1 00:07:07.950 Devices: 00:07:07.950 ID SIZE PATH 00:07:07.950 1 510.00MiB /dev/nvme0n1p1 00:07:07.950 00:07:07.950 13:36:10 -- common/autotest_common.sh@931 -- # return 0 00:07:07.950 13:36:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.950 13:36:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.950 13:36:10 -- target/filesystem.sh@25 -- # sync 00:07:07.950 13:36:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.950 13:36:10 -- target/filesystem.sh@27 -- # sync 00:07:07.950 13:36:10 -- target/filesystem.sh@29 -- # i=0 00:07:07.950 13:36:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.950 13:36:10 -- target/filesystem.sh@37 -- # kill -0 1041052 00:07:07.950 13:36:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.950 13:36:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.950 13:36:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.950 13:36:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.950 00:07:07.950 real 0m0.171s 00:07:07.950 user 0m0.010s 00:07:07.950 sys 0m0.048s 00:07:07.950 13:36:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.950 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:07.950 ************************************ 00:07:07.950 END TEST filesystem_btrfs 00:07:07.950 ************************************ 00:07:07.950 13:36:10 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:07.950 13:36:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:07.950 13:36:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.950 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.212 ************************************ 00:07:08.212 START TEST filesystem_xfs 00:07:08.212 ************************************ 00:07:08.212 13:36:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:08.212 13:36:10 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:08.212 13:36:10 -- target/filesystem.sh@19 -- 
# nvme_name=nvme0n1 00:07:08.212 13:36:10 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:08.212 13:36:10 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:08.212 13:36:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:08.212 13:36:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:08.212 13:36:10 -- common/autotest_common.sh@915 -- # local force 00:07:08.212 13:36:10 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:08.212 13:36:10 -- common/autotest_common.sh@920 -- # force=-f 00:07:08.212 13:36:10 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:08.212 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:08.212 = sectsz=512 attr=2, projid32bit=1 00:07:08.212 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:08.212 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:08.212 data = bsize=4096 blocks=130560, imaxpct=25 00:07:08.212 = sunit=0 swidth=0 blks 00:07:08.212 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:08.212 log =internal log bsize=4096 blocks=16384, version=2 00:07:08.212 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:08.212 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:08.212 Discarding blocks...Done. 00:07:08.212 13:36:10 -- common/autotest_common.sh@931 -- # return 0 00:07:08.212 13:36:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.212 13:36:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.212 13:36:10 -- target/filesystem.sh@25 -- # sync 00:07:08.212 13:36:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.212 13:36:11 -- target/filesystem.sh@27 -- # sync 00:07:08.212 13:36:11 -- target/filesystem.sh@29 -- # i=0 00:07:08.212 13:36:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.212 13:36:11 -- target/filesystem.sh@37 -- # kill -0 1041052 00:07:08.212 13:36:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.212 13:36:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.470 13:36:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.470 13:36:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.470 00:07:08.470 real 0m0.198s 00:07:08.470 user 0m0.013s 00:07:08.470 sys 0m0.034s 00:07:08.470 13:36:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.470 13:36:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.470 ************************************ 00:07:08.470 END TEST filesystem_xfs 00:07:08.470 ************************************ 00:07:08.470 13:36:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:08.470 13:36:11 -- target/filesystem.sh@93 -- # sync 00:07:08.470 13:36:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:09.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.404 13:36:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:09.404 13:36:12 -- common/autotest_common.sh@1205 -- # local i=0 00:07:09.404 13:36:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:09.404 13:36:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.404 13:36:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:09.404 13:36:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.404 13:36:12 -- common/autotest_common.sh@1217 -- # return 0 00:07:09.404 13:36:12 -- target/filesystem.sh@97 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:09.404 13:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.404 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:07:09.404 13:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.404 13:36:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:09.404 13:36:12 -- target/filesystem.sh@101 -- # killprocess 1041052 00:07:09.404 13:36:12 -- common/autotest_common.sh@936 -- # '[' -z 1041052 ']' 00:07:09.404 13:36:12 -- common/autotest_common.sh@940 -- # kill -0 1041052 00:07:09.404 13:36:12 -- common/autotest_common.sh@941 -- # uname 00:07:09.404 13:36:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.404 13:36:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1041052 00:07:09.661 13:36:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.661 13:36:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.661 13:36:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1041052' 00:07:09.661 killing process with pid 1041052 00:07:09.661 13:36:12 -- common/autotest_common.sh@955 -- # kill 1041052 00:07:09.662 13:36:12 -- common/autotest_common.sh@960 -- # wait 1041052 00:07:10.227 13:36:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:10.227 00:07:10.227 real 0m8.772s 00:07:10.227 user 0m34.033s 00:07:10.227 sys 0m1.057s 00:07:10.227 13:36:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.227 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:07:10.227 ************************************ 00:07:10.227 END TEST nvmf_filesystem_no_in_capsule 00:07:10.227 ************************************ 00:07:10.227 13:36:12 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:10.227 13:36:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.227 13:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.227 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:07:10.227 ************************************ 00:07:10.227 START TEST nvmf_filesystem_in_capsule 00:07:10.227 ************************************ 00:07:10.227 13:36:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:10.227 13:36:12 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:10.227 13:36:12 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:10.227 13:36:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:10.227 13:36:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:10.227 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:07:10.227 13:36:12 -- nvmf/common.sh@470 -- # nvmfpid=1042854 00:07:10.227 13:36:12 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.227 13:36:12 -- nvmf/common.sh@471 -- # waitforlisten 1042854 00:07:10.227 13:36:12 -- common/autotest_common.sh@817 -- # '[' -z 1042854 ']' 00:07:10.228 13:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.228 13:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.228 13:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
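The no-in-capsule pass has just been torn down (subsystem deleted, pid 1041052 killed) and the whole filesystem suite now repeats as nvmf_filesystem_in_capsule with a 4096-byte in-capsule data size. The only target-side difference between the two passes is the -c argument handed to the transport:

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0      # first pass: no in-capsule data
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096   # second pass: 4 KiB in-capsule data

Even with -c 0 the RDMA transport raises the effective in-capsule size to its 256-byte minimum, as the earlier "minimum size required to support msdbd=16" warning in this log notes.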
00:07:10.228 13:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.228 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:07:10.228 [2024-04-18 13:36:12.982294] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:10.228 [2024-04-18 13:36:12.982391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.228 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.486 [2024-04-18 13:36:13.068321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.486 [2024-04-18 13:36:13.189842] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.486 [2024-04-18 13:36:13.189906] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.486 [2024-04-18 13:36:13.189924] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.486 [2024-04-18 13:36:13.189948] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.486 [2024-04-18 13:36:13.189963] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.486 [2024-04-18 13:36:13.190027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.486 [2024-04-18 13:36:13.190081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.486 [2024-04-18 13:36:13.190133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.486 [2024-04-18 13:36:13.190137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.743 13:36:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:10.743 13:36:13 -- common/autotest_common.sh@850 -- # return 0 00:07:10.744 13:36:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:10.744 13:36:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:10.744 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:10.744 13:36:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.744 13:36:13 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:10.744 13:36:13 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:10.744 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.744 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:10.744 [2024-04-18 13:36:13.386661] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16cb090/0x16cf580) succeed. 00:07:10.744 [2024-04-18 13:36:13.398837] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16cc680/0x1710c10) succeed. 
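The second nvmf_tgt instance is coming up here; once its subsystem and listener exist, the host side repeats the same steps as in the first pass. Condensed from the surrounding log, the per-filesystem host flow looks roughly like this (xfs shown; the ext4 and btrfs cases only change the mkfs call, and the `-i 15` comes from the RDMA branch of nvmf/common.sh):

    # Host side, condensed from this log:
    nvme connect -i 15 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
      --hostid=6b85a288-a0c4-e211-af09-001e678e7911 \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && mkfs.xfs -f "/dev/${nvme_name}p1"
    mount "/dev/${nvme_name}p1" /mnt/device      # touch/sync/rm checks run against this mount
    umount /mnt/device
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1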
00:07:11.002 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.002 13:36:13 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:11.002 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.002 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 Malloc1 00:07:11.002 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.002 13:36:13 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.002 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.002 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.002 13:36:13 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.002 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.002 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.002 13:36:13 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:11.002 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.002 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 [2024-04-18 13:36:13.756215] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:11.002 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.002 13:36:13 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:11.002 13:36:13 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:11.002 13:36:13 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:11.002 13:36:13 -- common/autotest_common.sh@1366 -- # local bs 00:07:11.002 13:36:13 -- common/autotest_common.sh@1367 -- # local nb 00:07:11.003 13:36:13 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:11.003 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.003 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:11.003 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.003 13:36:13 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:11.003 { 00:07:11.003 "name": "Malloc1", 00:07:11.003 "aliases": [ 00:07:11.003 "fb7ce830-28ba-4d46-8b70-5d406c4fb940" 00:07:11.003 ], 00:07:11.003 "product_name": "Malloc disk", 00:07:11.003 "block_size": 512, 00:07:11.003 "num_blocks": 1048576, 00:07:11.003 "uuid": "fb7ce830-28ba-4d46-8b70-5d406c4fb940", 00:07:11.003 "assigned_rate_limits": { 00:07:11.003 "rw_ios_per_sec": 0, 00:07:11.003 "rw_mbytes_per_sec": 0, 00:07:11.003 "r_mbytes_per_sec": 0, 00:07:11.003 "w_mbytes_per_sec": 0 00:07:11.003 }, 00:07:11.003 "claimed": true, 00:07:11.003 "claim_type": "exclusive_write", 00:07:11.003 "zoned": false, 00:07:11.003 "supported_io_types": { 00:07:11.003 "read": true, 00:07:11.003 "write": true, 00:07:11.003 "unmap": true, 00:07:11.003 "write_zeroes": true, 00:07:11.003 "flush": true, 00:07:11.003 "reset": true, 00:07:11.003 "compare": false, 00:07:11.003 "compare_and_write": false, 00:07:11.003 "abort": true, 00:07:11.003 "nvme_admin": false, 00:07:11.003 "nvme_io": false 00:07:11.003 }, 00:07:11.003 "memory_domains": [ 00:07:11.003 { 00:07:11.003 "dma_device_id": "system", 00:07:11.003 "dma_device_type": 1 00:07:11.003 }, 00:07:11.003 { 00:07:11.003 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:11.003 "dma_device_type": 2 00:07:11.003 } 00:07:11.003 ], 00:07:11.003 "driver_specific": {} 00:07:11.003 } 00:07:11.003 ]' 00:07:11.003 13:36:13 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:11.260 13:36:13 -- common/autotest_common.sh@1369 -- # bs=512 00:07:11.260 13:36:13 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:11.260 13:36:13 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:11.261 13:36:13 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:11.261 13:36:13 -- common/autotest_common.sh@1374 -- # echo 512 00:07:11.261 13:36:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:11.261 13:36:13 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:12.195 13:36:14 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.195 13:36:14 -- common/autotest_common.sh@1184 -- # local i=0 00:07:12.195 13:36:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.195 13:36:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:12.195 13:36:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:14.745 13:36:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:14.745 13:36:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:14.745 13:36:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.745 13:36:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:14.745 13:36:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.745 13:36:17 -- common/autotest_common.sh@1194 -- # return 0 00:07:14.745 13:36:17 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:14.745 13:36:17 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:14.745 13:36:17 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:14.745 13:36:17 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:14.745 13:36:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:14.745 13:36:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:14.745 13:36:17 -- setup/common.sh@80 -- # echo 536870912 00:07:14.745 13:36:17 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:14.745 13:36:17 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:14.745 13:36:17 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:14.745 13:36:17 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:14.745 13:36:17 -- target/filesystem.sh@69 -- # partprobe 00:07:14.745 13:36:17 -- target/filesystem.sh@70 -- # sleep 1 00:07:15.699 13:36:18 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:15.699 13:36:18 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:15.699 13:36:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:15.699 13:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.699 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:07:15.699 ************************************ 00:07:15.699 START TEST filesystem_in_capsule_ext4 00:07:15.699 ************************************ 00:07:15.699 13:36:18 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:15.699 13:36:18 -- 
target/filesystem.sh@18 -- # fstype=ext4 00:07:15.699 13:36:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:15.699 13:36:18 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:15.699 13:36:18 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:15.699 13:36:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:15.699 13:36:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:15.699 13:36:18 -- common/autotest_common.sh@915 -- # local force 00:07:15.699 13:36:18 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:15.699 13:36:18 -- common/autotest_common.sh@918 -- # force=-F 00:07:15.699 13:36:18 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:15.699 mke2fs 1.46.5 (30-Dec-2021) 00:07:15.956 Discarding device blocks: 0/522240 done 00:07:15.956 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:15.956 Filesystem UUID: 45dc0f63-8b9d-4824-b386-b402db212e41 00:07:15.956 Superblock backups stored on blocks: 00:07:15.956 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:15.956 00:07:15.956 Allocating group tables: 0/64 done 00:07:15.956 Writing inode tables: 0/64 done 00:07:15.956 Creating journal (8192 blocks): done 00:07:15.956 Writing superblocks and filesystem accounting information: 0/64 done 00:07:15.956 00:07:15.956 13:36:18 -- common/autotest_common.sh@931 -- # return 0 00:07:15.956 13:36:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.956 13:36:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.956 13:36:18 -- target/filesystem.sh@25 -- # sync 00:07:15.956 13:36:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.956 13:36:18 -- target/filesystem.sh@27 -- # sync 00:07:15.956 13:36:18 -- target/filesystem.sh@29 -- # i=0 00:07:15.956 13:36:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:15.956 13:36:18 -- target/filesystem.sh@37 -- # kill -0 1042854 00:07:15.956 13:36:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:15.956 13:36:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.956 13:36:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.956 13:36:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.956 00:07:15.956 real 0m0.180s 00:07:15.956 user 0m0.015s 00:07:15.956 sys 0m0.041s 00:07:15.956 13:36:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.956 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:07:15.956 ************************************ 00:07:15.956 END TEST filesystem_in_capsule_ext4 00:07:15.956 ************************************ 00:07:15.956 13:36:18 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:15.956 13:36:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:15.956 13:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.956 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:07:16.214 ************************************ 00:07:16.214 START TEST filesystem_in_capsule_btrfs 00:07:16.214 ************************************ 00:07:16.214 13:36:18 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:16.214 13:36:18 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:16.214 13:36:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.214 13:36:18 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:16.214 13:36:18 -- common/autotest_common.sh@912 -- # local 
fstype=btrfs 00:07:16.214 13:36:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:16.214 13:36:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:16.214 13:36:18 -- common/autotest_common.sh@915 -- # local force 00:07:16.214 13:36:18 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:16.214 13:36:18 -- common/autotest_common.sh@920 -- # force=-f 00:07:16.214 13:36:18 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:16.214 btrfs-progs v6.6.2 00:07:16.214 See https://btrfs.readthedocs.io for more information. 00:07:16.214 00:07:16.214 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:16.214 NOTE: several default settings have changed in version 5.15, please make sure 00:07:16.214 this does not affect your deployments: 00:07:16.214 - DUP for metadata (-m dup) 00:07:16.214 - enabled no-holes (-O no-holes) 00:07:16.214 - enabled free-space-tree (-R free-space-tree) 00:07:16.214 00:07:16.214 Label: (null) 00:07:16.214 UUID: 3615dd3c-e985-44b6-b80b-9e301f31d837 00:07:16.214 Node size: 16384 00:07:16.214 Sector size: 4096 00:07:16.214 Filesystem size: 510.00MiB 00:07:16.214 Block group profiles: 00:07:16.214 Data: single 8.00MiB 00:07:16.214 Metadata: DUP 32.00MiB 00:07:16.214 System: DUP 8.00MiB 00:07:16.214 SSD detected: yes 00:07:16.214 Zoned device: no 00:07:16.214 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:16.214 Runtime features: free-space-tree 00:07:16.214 Checksum: crc32c 00:07:16.214 Number of devices: 1 00:07:16.214 Devices: 00:07:16.214 ID SIZE PATH 00:07:16.214 1 510.00MiB /dev/nvme0n1p1 00:07:16.214 00:07:16.214 13:36:18 -- common/autotest_common.sh@931 -- # return 0 00:07:16.214 13:36:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.214 13:36:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.214 13:36:18 -- target/filesystem.sh@25 -- # sync 00:07:16.214 13:36:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.214 13:36:18 -- target/filesystem.sh@27 -- # sync 00:07:16.214 13:36:18 -- target/filesystem.sh@29 -- # i=0 00:07:16.214 13:36:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.214 13:36:18 -- target/filesystem.sh@37 -- # kill -0 1042854 00:07:16.214 13:36:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.214 13:36:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.214 13:36:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.214 13:36:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.214 00:07:16.214 real 0m0.183s 00:07:16.214 user 0m0.009s 00:07:16.214 sys 0m0.055s 00:07:16.214 13:36:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.214 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:07:16.214 ************************************ 00:07:16.214 END TEST filesystem_in_capsule_btrfs 00:07:16.214 ************************************ 00:07:16.214 13:36:18 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:16.214 13:36:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:16.214 13:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.214 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:07:16.472 ************************************ 00:07:16.472 START TEST filesystem_in_capsule_xfs 00:07:16.472 ************************************ 00:07:16.472 13:36:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:16.472 
13:36:19 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:16.472 13:36:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.472 13:36:19 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:16.472 13:36:19 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:16.472 13:36:19 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:16.472 13:36:19 -- common/autotest_common.sh@914 -- # local i=0 00:07:16.472 13:36:19 -- common/autotest_common.sh@915 -- # local force 00:07:16.472 13:36:19 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:16.472 13:36:19 -- common/autotest_common.sh@920 -- # force=-f 00:07:16.472 13:36:19 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:16.472 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:16.472 = sectsz=512 attr=2, projid32bit=1 00:07:16.472 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:16.472 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:16.472 data = bsize=4096 blocks=130560, imaxpct=25 00:07:16.472 = sunit=0 swidth=0 blks 00:07:16.472 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:16.472 log =internal log bsize=4096 blocks=16384, version=2 00:07:16.472 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:16.472 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:16.472 Discarding blocks...Done. 00:07:16.472 13:36:19 -- common/autotest_common.sh@931 -- # return 0 00:07:16.472 13:36:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.472 13:36:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.472 13:36:19 -- target/filesystem.sh@25 -- # sync 00:07:16.472 13:36:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.472 13:36:19 -- target/filesystem.sh@27 -- # sync 00:07:16.472 13:36:19 -- target/filesystem.sh@29 -- # i=0 00:07:16.472 13:36:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.472 13:36:19 -- target/filesystem.sh@37 -- # kill -0 1042854 00:07:16.729 13:36:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.729 13:36:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.729 13:36:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.729 13:36:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.729 00:07:16.729 real 0m0.186s 00:07:16.729 user 0m0.020s 00:07:16.729 sys 0m0.027s 00:07:16.729 13:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.729 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.729 ************************************ 00:07:16.729 END TEST filesystem_in_capsule_xfs 00:07:16.729 ************************************ 00:07:16.729 13:36:19 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:16.729 13:36:19 -- target/filesystem.sh@93 -- # sync 00:07:16.729 13:36:19 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.660 13:36:20 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.660 13:36:20 -- common/autotest_common.sh@1205 -- # local i=0 00:07:17.660 13:36:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:17.660 13:36:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.660 13:36:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:17.660 13:36:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.660 13:36:20 -- 
common/autotest_common.sh@1217 -- # return 0 00:07:17.660 13:36:20 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.660 13:36:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.660 13:36:20 -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 13:36:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.918 13:36:20 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:17.918 13:36:20 -- target/filesystem.sh@101 -- # killprocess 1042854 00:07:17.918 13:36:20 -- common/autotest_common.sh@936 -- # '[' -z 1042854 ']' 00:07:17.918 13:36:20 -- common/autotest_common.sh@940 -- # kill -0 1042854 00:07:17.918 13:36:20 -- common/autotest_common.sh@941 -- # uname 00:07:17.918 13:36:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.918 13:36:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1042854 00:07:17.918 13:36:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.918 13:36:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.918 13:36:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1042854' 00:07:17.918 killing process with pid 1042854 00:07:17.918 13:36:20 -- common/autotest_common.sh@955 -- # kill 1042854 00:07:17.918 13:36:20 -- common/autotest_common.sh@960 -- # wait 1042854 00:07:18.484 13:36:21 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:18.484 00:07:18.484 real 0m8.199s 00:07:18.484 user 0m31.419s 00:07:18.484 sys 0m1.133s 00:07:18.484 13:36:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.484 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:18.484 ************************************ 00:07:18.484 END TEST nvmf_filesystem_in_capsule 00:07:18.484 ************************************ 00:07:18.484 13:36:21 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:18.485 13:36:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:18.485 13:36:21 -- nvmf/common.sh@117 -- # sync 00:07:18.485 13:36:21 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:18.485 13:36:21 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:18.485 13:36:21 -- nvmf/common.sh@120 -- # set +e 00:07:18.485 13:36:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:18.485 13:36:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:18.485 rmmod nvme_rdma 00:07:18.485 rmmod nvme_fabrics 00:07:18.485 13:36:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:18.485 13:36:21 -- nvmf/common.sh@124 -- # set -e 00:07:18.485 13:36:21 -- nvmf/common.sh@125 -- # return 0 00:07:18.485 13:36:21 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:18.485 13:36:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:18.485 13:36:21 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:18.485 00:07:18.485 real 0m20.346s 00:07:18.485 user 1m6.625s 00:07:18.485 sys 0m4.470s 00:07:18.485 13:36:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.485 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:18.485 ************************************ 00:07:18.485 END TEST nvmf_filesystem 00:07:18.485 ************************************ 00:07:18.485 13:36:21 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:18.485 13:36:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:18.485 13:36:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.485 13:36:21 -- common/autotest_common.sh@10 -- # set 
+x 00:07:18.743 ************************************ 00:07:18.743 START TEST nvmf_discovery 00:07:18.743 ************************************ 00:07:18.743 13:36:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:18.743 * Looking for test storage... 00:07:18.743 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:18.743 13:36:21 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.743 13:36:21 -- nvmf/common.sh@7 -- # uname -s 00:07:18.743 13:36:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.743 13:36:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.743 13:36:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.743 13:36:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.743 13:36:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.743 13:36:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.743 13:36:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.743 13:36:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.743 13:36:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.743 13:36:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.743 13:36:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:18.743 13:36:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:18.743 13:36:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.743 13:36:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.743 13:36:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.743 13:36:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.743 13:36:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:18.743 13:36:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.743 13:36:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.743 13:36:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.744 13:36:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.744 13:36:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.744 13:36:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.744 13:36:21 -- paths/export.sh@5 -- # export PATH 00:07:18.744 13:36:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.744 13:36:21 -- nvmf/common.sh@47 -- # : 0 00:07:18.744 13:36:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.744 13:36:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.744 13:36:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.744 13:36:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.744 13:36:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.744 13:36:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.744 13:36:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.744 13:36:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.744 13:36:21 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:18.744 13:36:21 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:18.744 13:36:21 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:18.744 13:36:21 -- target/discovery.sh@15 -- # hash nvme 00:07:18.744 13:36:21 -- target/discovery.sh@20 -- # nvmftestinit 00:07:18.744 13:36:21 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:18.744 13:36:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.744 13:36:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:18.744 13:36:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:18.744 13:36:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:18.744 13:36:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.744 13:36:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.744 13:36:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.744 13:36:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:18.744 13:36:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:18.744 13:36:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.744 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:22.026 13:36:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:22.026 13:36:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.026 13:36:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.026 13:36:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.026 13:36:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.026 13:36:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.026 13:36:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.026 13:36:24 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:22.026 13:36:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.026 13:36:24 -- nvmf/common.sh@296 -- # e810=() 00:07:22.026 13:36:24 -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.026 13:36:24 -- nvmf/common.sh@297 -- # x722=() 00:07:22.026 13:36:24 -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.026 13:36:24 -- nvmf/common.sh@298 -- # mlx=() 00:07:22.026 13:36:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.026 13:36:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.026 13:36:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.027 13:36:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.027 13:36:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.027 13:36:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.027 13:36:24 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:22.027 13:36:24 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:22.027 13:36:24 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:22.027 13:36:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.027 13:36:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:22.027 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:22.027 13:36:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.027 13:36:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:22.027 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:22.027 13:36:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.027 13:36:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.027 13:36:24 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.027 
13:36:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.027 13:36:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:22.027 13:36:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.027 13:36:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:22.027 Found net devices under 0000:81:00.0: mlx_0_0 00:07:22.027 13:36:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.027 13:36:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.027 13:36:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:22.027 13:36:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.027 13:36:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:22.027 Found net devices under 0000:81:00.1: mlx_0_1 00:07:22.027 13:36:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.027 13:36:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:22.027 13:36:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:22.027 13:36:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:22.027 13:36:24 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:22.027 13:36:24 -- nvmf/common.sh@58 -- # uname 00:07:22.027 13:36:24 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:22.027 13:36:24 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:22.027 13:36:24 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:22.027 13:36:24 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:22.027 13:36:24 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:22.027 13:36:24 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:22.027 13:36:24 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:22.027 13:36:24 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:22.027 13:36:24 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:22.027 13:36:24 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:22.027 13:36:24 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:22.027 13:36:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:22.027 13:36:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:22.027 13:36:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:22.027 13:36:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:22.027 13:36:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:22.027 13:36:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:22.027 13:36:24 -- nvmf/common.sh@105 -- # continue 2 00:07:22.027 13:36:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.027 13:36:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@104 -- # 
echo mlx_0_1 00:07:22.027 13:36:24 -- nvmf/common.sh@105 -- # continue 2 00:07:22.027 13:36:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:22.027 13:36:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:22.027 13:36:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:22.027 13:36:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:22.027 13:36:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.027 13:36:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.027 13:36:24 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:22.027 13:36:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:22.027 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:22.027 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:22.027 altname enp129s0f0np0 00:07:22.027 inet 192.168.100.8/24 scope global mlx_0_0 00:07:22.027 valid_lft forever preferred_lft forever 00:07:22.027 13:36:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:22.027 13:36:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:22.027 13:36:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:22.027 13:36:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:22.027 13:36:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.027 13:36:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.027 13:36:24 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:22.027 13:36:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:22.027 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:22.027 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:22.027 altname enp129s0f1np1 00:07:22.027 inet 192.168.100.9/24 scope global mlx_0_1 00:07:22.027 valid_lft forever preferred_lft forever 00:07:22.027 13:36:24 -- nvmf/common.sh@411 -- # return 0 00:07:22.027 13:36:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:22.027 13:36:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:22.027 13:36:24 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:22.027 13:36:24 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:22.027 13:36:24 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:22.027 13:36:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:22.027 13:36:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:22.027 13:36:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:22.027 13:36:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:22.028 13:36:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:22.028 13:36:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.028 13:36:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.028 13:36:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:22.028 13:36:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:22.028 13:36:24 -- nvmf/common.sh@105 -- # continue 2 00:07:22.028 13:36:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.028 13:36:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.028 13:36:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:22.028 13:36:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.028 13:36:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:22.028 13:36:24 -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:07:22.028 13:36:24 -- nvmf/common.sh@105 -- # continue 2 00:07:22.028 13:36:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:22.028 13:36:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:22.028 13:36:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:22.028 13:36:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:22.028 13:36:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.028 13:36:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.028 13:36:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:22.028 13:36:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:22.028 13:36:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:22.028 13:36:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:22.028 13:36:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.028 13:36:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.028 13:36:24 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:22.028 192.168.100.9' 00:07:22.028 13:36:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:22.028 192.168.100.9' 00:07:22.028 13:36:24 -- nvmf/common.sh@446 -- # head -n 1 00:07:22.028 13:36:24 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:22.028 13:36:24 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:22.028 192.168.100.9' 00:07:22.028 13:36:24 -- nvmf/common.sh@447 -- # tail -n +2 00:07:22.028 13:36:24 -- nvmf/common.sh@447 -- # head -n 1 00:07:22.028 13:36:24 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:22.028 13:36:24 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:22.028 13:36:24 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:22.028 13:36:24 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:22.028 13:36:24 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:22.028 13:36:24 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:22.028 13:36:24 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:22.028 13:36:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:22.028 13:36:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:22.028 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:22.028 13:36:24 -- nvmf/common.sh@470 -- # nvmfpid=1046082 00:07:22.028 13:36:24 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.028 13:36:24 -- nvmf/common.sh@471 -- # waitforlisten 1046082 00:07:22.028 13:36:24 -- common/autotest_common.sh@817 -- # '[' -z 1046082 ']' 00:07:22.028 13:36:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.028 13:36:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:22.028 13:36:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.028 13:36:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:22.028 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:22.028 [2024-04-18 13:36:24.352759] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
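The address plumbing traced above reduces to reading the IPv4 address off each Mellanox port. A minimal standalone equivalent of that lookup, assuming this run's interface naming (mlx_0_0, mlx_0_1) and addressing, not the common.sh helpers themselves:

  # Print the IPv4 address assigned to each RDMA-capable port, using the
  # same ip/awk/cut pipeline the trace shows above.
  for nic in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
  done
  # On this host that yields 192.168.100.8 and 192.168.100.9; the first
  # address becomes NVMF_FIRST_TARGET_IP for the rest of the test.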
00:07:22.028 [2024-04-18 13:36:24.352840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.028 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.028 [2024-04-18 13:36:24.425800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.028 [2024-04-18 13:36:24.546147] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.028 [2024-04-18 13:36:24.546212] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.028 [2024-04-18 13:36:24.546238] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.028 [2024-04-18 13:36:24.546251] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.028 [2024-04-18 13:36:24.546263] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.028 [2024-04-18 13:36:24.546341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.028 [2024-04-18 13:36:24.546396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.028 [2024-04-18 13:36:24.546451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.028 [2024-04-18 13:36:24.546454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.962 13:36:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:22.962 13:36:25 -- common/autotest_common.sh@850 -- # return 0 00:07:22.962 13:36:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:22.962 13:36:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:22.962 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:22.962 13:36:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.962 13:36:25 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:22.962 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:22.962 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:22.962 [2024-04-18 13:36:25.592505] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x880090/0x884580) succeed. 00:07:22.962 [2024-04-18 13:36:25.604686] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x881680/0x8c5c10) succeed. 
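The target bring-up behind the notices above (nvmf_tgt start, RDMA transport creation, then the per-subsystem wiring repeated below for cnode1..cnode4) can be reproduced by hand with scripts/rpc.py. A rough sketch using the same arguments the trace shows, not the test's own wrappers; the polling loop on rpc_get_methods stands in for waitforlisten:

  # Launch the NVMe-oF target on four cores and wait for its RPC socket.
  ./build/bin/nvmf_tgt -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # RDMA transport with the buffer count / IO unit size used in this run.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # One null bdev exported as a namespace of cnode1, listening on 192.168.100.8:4420;
  # discovery.sh repeats this pattern for Null2..Null4 / cnode2..cnode4.
  ./scripts/rpc.py bdev_null_create Null1 102400 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420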
00:07:22.962 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@26 -- # seq 1 4 00:07:23.220 13:36:25 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:23.220 13:36:25 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 Null1 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 [2024-04-18 13:36:25.801061] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:23.220 13:36:25 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 Null2 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:23.220 13:36:25 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 Null3 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:23.220 13:36:25 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:23.220 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.220 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.220 Null4 00:07:23.220 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.220 13:36:25 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:23.221 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.221 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.221 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.221 13:36:25 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:23.221 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.221 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.221 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.221 13:36:25 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:07:23.221 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.221 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.221 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.221 13:36:25 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:23.221 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.221 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.221 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.221 13:36:25 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:07:23.221 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.221 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.221 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.221 13:36:25 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 4420 00:07:23.221 00:07:23.221 Discovery Log Number of Records 6, Generation counter 6 00:07:23.221 =====Discovery Log Entry 0====== 00:07:23.221 trtype: 
rdma 00:07:23.221 adrfam: ipv4 00:07:23.221 subtype: current discovery subsystem 00:07:23.221 treq: not required 00:07:23.221 portid: 0 00:07:23.221 trsvcid: 4420 00:07:23.221 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:23.221 traddr: 192.168.100.8 00:07:23.221 eflags: explicit discovery connections, duplicate discovery information 00:07:23.221 rdma_prtype: not specified 00:07:23.221 rdma_qptype: connected 00:07:23.221 rdma_cms: rdma-cm 00:07:23.221 rdma_pkey: 0x0000 00:07:23.221 =====Discovery Log Entry 1====== 00:07:23.221 trtype: rdma 00:07:23.221 adrfam: ipv4 00:07:23.221 subtype: nvme subsystem 00:07:23.221 treq: not required 00:07:23.221 portid: 0 00:07:23.221 trsvcid: 4420 00:07:23.221 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:23.221 traddr: 192.168.100.8 00:07:23.221 eflags: none 00:07:23.221 rdma_prtype: not specified 00:07:23.221 rdma_qptype: connected 00:07:23.221 rdma_cms: rdma-cm 00:07:23.221 rdma_pkey: 0x0000 00:07:23.221 =====Discovery Log Entry 2====== 00:07:23.221 trtype: rdma 00:07:23.221 adrfam: ipv4 00:07:23.221 subtype: nvme subsystem 00:07:23.221 treq: not required 00:07:23.221 portid: 0 00:07:23.221 trsvcid: 4420 00:07:23.221 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:23.221 traddr: 192.168.100.8 00:07:23.221 eflags: none 00:07:23.221 rdma_prtype: not specified 00:07:23.221 rdma_qptype: connected 00:07:23.221 rdma_cms: rdma-cm 00:07:23.221 rdma_pkey: 0x0000 00:07:23.221 =====Discovery Log Entry 3====== 00:07:23.221 trtype: rdma 00:07:23.221 adrfam: ipv4 00:07:23.221 subtype: nvme subsystem 00:07:23.221 treq: not required 00:07:23.221 portid: 0 00:07:23.221 trsvcid: 4420 00:07:23.221 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:23.221 traddr: 192.168.100.8 00:07:23.221 eflags: none 00:07:23.221 rdma_prtype: not specified 00:07:23.221 rdma_qptype: connected 00:07:23.221 rdma_cms: rdma-cm 00:07:23.221 rdma_pkey: 0x0000 00:07:23.221 =====Discovery Log Entry 4====== 00:07:23.221 trtype: rdma 00:07:23.221 adrfam: ipv4 00:07:23.221 subtype: nvme subsystem 00:07:23.221 treq: not required 00:07:23.221 portid: 0 00:07:23.221 trsvcid: 4420 00:07:23.221 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:23.221 traddr: 192.168.100.8 00:07:23.221 eflags: none 00:07:23.221 rdma_prtype: not specified 00:07:23.221 rdma_qptype: connected 00:07:23.221 rdma_cms: rdma-cm 00:07:23.221 rdma_pkey: 0x0000 00:07:23.221 =====Discovery Log Entry 5====== 00:07:23.221 trtype: rdma 00:07:23.221 adrfam: ipv4 00:07:23.221 subtype: discovery subsystem referral 00:07:23.221 treq: not required 00:07:23.221 portid: 0 00:07:23.221 trsvcid: 4430 00:07:23.221 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:23.221 traddr: 192.168.100.8 00:07:23.221 eflags: none 00:07:23.221 rdma_prtype: unrecognized 00:07:23.221 rdma_qptype: unrecognized 00:07:23.221 rdma_cms: unrecognized 00:07:23.221 rdma_pkey: 0x0000 00:07:23.221 13:36:26 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:23.221 Perform nvmf subsystem discovery via RPC 00:07:23.221 13:36:26 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:23.221 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.221 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.221 [2024-04-18 13:36:26.013469] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:23.221 [ 00:07:23.221 { 00:07:23.221 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:23.221 "subtype": "Discovery", 
00:07:23.221 "listen_addresses": [ 00:07:23.221 { 00:07:23.221 "transport": "RDMA", 00:07:23.221 "trtype": "RDMA", 00:07:23.221 "adrfam": "IPv4", 00:07:23.221 "traddr": "192.168.100.8", 00:07:23.221 "trsvcid": "4420" 00:07:23.221 } 00:07:23.221 ], 00:07:23.221 "allow_any_host": true, 00:07:23.221 "hosts": [] 00:07:23.221 }, 00:07:23.221 { 00:07:23.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:23.221 "subtype": "NVMe", 00:07:23.221 "listen_addresses": [ 00:07:23.221 { 00:07:23.221 "transport": "RDMA", 00:07:23.221 "trtype": "RDMA", 00:07:23.221 "adrfam": "IPv4", 00:07:23.221 "traddr": "192.168.100.8", 00:07:23.221 "trsvcid": "4420" 00:07:23.221 } 00:07:23.221 ], 00:07:23.221 "allow_any_host": true, 00:07:23.221 "hosts": [], 00:07:23.221 "serial_number": "SPDK00000000000001", 00:07:23.221 "model_number": "SPDK bdev Controller", 00:07:23.221 "max_namespaces": 32, 00:07:23.221 "min_cntlid": 1, 00:07:23.221 "max_cntlid": 65519, 00:07:23.221 "namespaces": [ 00:07:23.221 { 00:07:23.221 "nsid": 1, 00:07:23.221 "bdev_name": "Null1", 00:07:23.221 "name": "Null1", 00:07:23.221 "nguid": "DDD81E15DAD6489A9AA9E736A118D8D9", 00:07:23.221 "uuid": "ddd81e15-dad6-489a-9aa9-e736a118d8d9" 00:07:23.221 } 00:07:23.221 ] 00:07:23.221 }, 00:07:23.221 { 00:07:23.221 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:23.221 "subtype": "NVMe", 00:07:23.221 "listen_addresses": [ 00:07:23.221 { 00:07:23.221 "transport": "RDMA", 00:07:23.221 "trtype": "RDMA", 00:07:23.221 "adrfam": "IPv4", 00:07:23.221 "traddr": "192.168.100.8", 00:07:23.221 "trsvcid": "4420" 00:07:23.221 } 00:07:23.221 ], 00:07:23.221 "allow_any_host": true, 00:07:23.221 "hosts": [], 00:07:23.221 "serial_number": "SPDK00000000000002", 00:07:23.221 "model_number": "SPDK bdev Controller", 00:07:23.221 "max_namespaces": 32, 00:07:23.221 "min_cntlid": 1, 00:07:23.221 "max_cntlid": 65519, 00:07:23.221 "namespaces": [ 00:07:23.221 { 00:07:23.221 "nsid": 1, 00:07:23.221 "bdev_name": "Null2", 00:07:23.221 "name": "Null2", 00:07:23.221 "nguid": "5AD717DB471542D5AE681C438E64B13A", 00:07:23.479 "uuid": "5ad717db-4715-42d5-ae68-1c438e64b13a" 00:07:23.479 } 00:07:23.479 ] 00:07:23.479 }, 00:07:23.479 { 00:07:23.479 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:23.479 "subtype": "NVMe", 00:07:23.479 "listen_addresses": [ 00:07:23.479 { 00:07:23.479 "transport": "RDMA", 00:07:23.479 "trtype": "RDMA", 00:07:23.479 "adrfam": "IPv4", 00:07:23.479 "traddr": "192.168.100.8", 00:07:23.480 "trsvcid": "4420" 00:07:23.480 } 00:07:23.480 ], 00:07:23.480 "allow_any_host": true, 00:07:23.480 "hosts": [], 00:07:23.480 "serial_number": "SPDK00000000000003", 00:07:23.480 "model_number": "SPDK bdev Controller", 00:07:23.480 "max_namespaces": 32, 00:07:23.480 "min_cntlid": 1, 00:07:23.480 "max_cntlid": 65519, 00:07:23.480 "namespaces": [ 00:07:23.480 { 00:07:23.480 "nsid": 1, 00:07:23.480 "bdev_name": "Null3", 00:07:23.480 "name": "Null3", 00:07:23.480 "nguid": "B20CC091CE9A45A9B1936115234CD90C", 00:07:23.480 "uuid": "b20cc091-ce9a-45a9-b193-6115234cd90c" 00:07:23.480 } 00:07:23.480 ] 00:07:23.480 }, 00:07:23.480 { 00:07:23.480 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:23.480 "subtype": "NVMe", 00:07:23.480 "listen_addresses": [ 00:07:23.480 { 00:07:23.480 "transport": "RDMA", 00:07:23.480 "trtype": "RDMA", 00:07:23.480 "adrfam": "IPv4", 00:07:23.480 "traddr": "192.168.100.8", 00:07:23.480 "trsvcid": "4420" 00:07:23.480 } 00:07:23.480 ], 00:07:23.480 "allow_any_host": true, 00:07:23.480 "hosts": [], 00:07:23.480 "serial_number": "SPDK00000000000004", 00:07:23.480 "model_number": "SPDK bdev 
Controller", 00:07:23.480 "max_namespaces": 32, 00:07:23.480 "min_cntlid": 1, 00:07:23.480 "max_cntlid": 65519, 00:07:23.480 "namespaces": [ 00:07:23.480 { 00:07:23.480 "nsid": 1, 00:07:23.480 "bdev_name": "Null4", 00:07:23.480 "name": "Null4", 00:07:23.480 "nguid": "E50AC62D337840F781E5AFB92029D143", 00:07:23.480 "uuid": "e50ac62d-3378-40f7-81e5-afb92029d143" 00:07:23.480 } 00:07:23.480 ] 00:07:23.480 } 00:07:23.480 ] 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@42 -- # seq 1 4 00:07:23.480 13:36:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:23.480 13:36:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:23.480 13:36:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:23.480 13:36:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:23.480 13:36:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 
13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:23.480 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.480 13:36:26 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:23.480 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.480 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.480 13:36:26 -- target/discovery.sh@49 -- # check_bdevs= 00:07:23.480 13:36:26 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:23.480 13:36:26 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:23.480 13:36:26 -- target/discovery.sh@57 -- # nvmftestfini 00:07:23.480 13:36:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:23.480 13:36:26 -- nvmf/common.sh@117 -- # sync 00:07:23.480 13:36:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:23.480 13:36:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:23.480 13:36:26 -- nvmf/common.sh@120 -- # set +e 00:07:23.480 13:36:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.480 13:36:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:23.480 rmmod nvme_rdma 00:07:23.480 rmmod nvme_fabrics 00:07:23.480 13:36:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.480 13:36:26 -- nvmf/common.sh@124 -- # set -e 00:07:23.480 13:36:26 -- nvmf/common.sh@125 -- # return 0 00:07:23.480 13:36:26 -- nvmf/common.sh@478 -- # '[' -n 1046082 ']' 00:07:23.480 13:36:26 -- nvmf/common.sh@479 -- # killprocess 1046082 00:07:23.480 13:36:26 -- common/autotest_common.sh@936 -- # '[' -z 1046082 ']' 00:07:23.480 13:36:26 -- common/autotest_common.sh@940 -- # kill -0 1046082 00:07:23.480 13:36:26 -- common/autotest_common.sh@941 -- # uname 00:07:23.480 13:36:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:23.480 13:36:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1046082 00:07:23.480 13:36:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:23.480 13:36:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:23.480 13:36:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1046082' 00:07:23.480 killing process with pid 1046082 00:07:23.480 13:36:26 -- common/autotest_common.sh@955 -- # kill 1046082 00:07:23.480 [2024-04-18 13:36:26.236543] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:23.480 13:36:26 -- common/autotest_common.sh@960 -- # wait 1046082 00:07:24.044 13:36:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:24.044 13:36:26 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:24.044 00:07:24.044 real 0m5.291s 00:07:24.044 user 0m9.323s 00:07:24.044 sys 0m2.578s 00:07:24.044 13:36:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.044 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:24.044 ************************************ 00:07:24.044 END TEST nvmf_discovery 00:07:24.044 ************************************ 00:07:24.044 13:36:26 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:24.044 13:36:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:24.044 13:36:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.044 13:36:26 -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.044 ************************************ 00:07:24.044 START TEST nvmf_referrals 00:07:24.044 ************************************ 00:07:24.045 13:36:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:24.045 * Looking for test storage... 00:07:24.045 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:24.045 13:36:26 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.045 13:36:26 -- nvmf/common.sh@7 -- # uname -s 00:07:24.045 13:36:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.045 13:36:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.045 13:36:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.045 13:36:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.045 13:36:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.045 13:36:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.045 13:36:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.045 13:36:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.045 13:36:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.045 13:36:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.045 13:36:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:24.045 13:36:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:24.045 13:36:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.045 13:36:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.045 13:36:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.045 13:36:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.045 13:36:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:24.045 13:36:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.045 13:36:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.045 13:36:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.045 13:36:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.045 13:36:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.045 13:36:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.045 13:36:26 -- paths/export.sh@5 -- # export PATH 00:07:24.045 13:36:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.045 13:36:26 -- nvmf/common.sh@47 -- # : 0 00:07:24.045 13:36:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.045 13:36:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.045 13:36:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.045 13:36:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.045 13:36:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.045 13:36:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.045 13:36:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.045 13:36:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.045 13:36:26 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:24.045 13:36:26 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:24.045 13:36:26 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:24.045 13:36:26 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:24.045 13:36:26 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:24.045 13:36:26 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:24.045 13:36:26 -- target/referrals.sh@37 -- # nvmftestinit 00:07:24.045 13:36:26 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:24.045 13:36:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.045 13:36:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:24.045 13:36:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:24.045 13:36:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:24.045 13:36:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.045 13:36:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.045 13:36:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.303 13:36:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:24.303 13:36:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:24.303 13:36:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.303 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:07:27.584 13:36:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:27.584 13:36:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.584 13:36:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.584 13:36:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 
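The array setup that follows builds lists of supported NVMe-oF NICs keyed by PCI vendor/device ID and, on this rig, ends up matching the two Mellanox ports at 0000:81:00.0 and 0000:81:00.1 (0x15b3 - 0x1015). A minimal standalone sketch of the same lookup outside the test harness (the lspci usage here is illustrative, not what nvmf/common.sh itself runs):

    # Enumerate ports matching vendor 0x15b3, device 0x1015, then list the
    # netdev name registered under each PCI function's sysfs node.
    lspci -d 15b3:1015
    for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done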
00:07:27.584 13:36:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.584 13:36:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.584 13:36:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.584 13:36:29 -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.584 13:36:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.584 13:36:29 -- nvmf/common.sh@296 -- # e810=() 00:07:27.584 13:36:29 -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.584 13:36:29 -- nvmf/common.sh@297 -- # x722=() 00:07:27.584 13:36:29 -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.584 13:36:29 -- nvmf/common.sh@298 -- # mlx=() 00:07:27.584 13:36:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.584 13:36:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.584 13:36:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.584 13:36:29 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:27.584 13:36:29 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:27.584 13:36:29 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:27.584 13:36:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.584 13:36:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:27.584 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:27.584 13:36:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.584 13:36:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:27.584 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:27.584 13:36:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect 
-i 15' 00:07:27.584 13:36:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.584 13:36:29 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.584 13:36:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:27.584 13:36:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.584 13:36:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:27.584 Found net devices under 0000:81:00.0: mlx_0_0 00:07:27.584 13:36:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.584 13:36:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.584 13:36:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:27.584 13:36:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.584 13:36:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:27.584 Found net devices under 0000:81:00.1: mlx_0_1 00:07:27.584 13:36:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.584 13:36:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:27.584 13:36:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:27.584 13:36:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:27.584 13:36:29 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:27.584 13:36:29 -- nvmf/common.sh@58 -- # uname 00:07:27.584 13:36:29 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:27.584 13:36:29 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:27.584 13:36:29 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:27.584 13:36:29 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:27.584 13:36:29 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:27.584 13:36:29 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:27.584 13:36:29 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:27.584 13:36:29 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:27.584 13:36:29 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:27.584 13:36:29 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:27.584 13:36:29 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:27.584 13:36:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.584 13:36:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:27.584 13:36:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:27.584 13:36:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.584 13:36:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:27.584 13:36:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:27.584 13:36:29 -- nvmf/common.sh@105 -- # continue 2 00:07:27.584 13:36:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.584 13:36:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:27.584 13:36:29 -- nvmf/common.sh@105 -- # continue 2 00:07:27.584 13:36:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:27.584 13:36:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:27.584 13:36:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:27.584 13:36:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:27.584 13:36:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.584 13:36:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.584 13:36:29 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:27.584 13:36:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:27.584 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:27.584 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:27.584 altname enp129s0f0np0 00:07:27.584 inet 192.168.100.8/24 scope global mlx_0_0 00:07:27.584 valid_lft forever preferred_lft forever 00:07:27.584 13:36:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:27.584 13:36:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:27.584 13:36:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:27.584 13:36:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:27.584 13:36:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.584 13:36:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.584 13:36:29 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:27.584 13:36:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:27.584 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:27.584 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:27.584 altname enp129s0f1np1 00:07:27.584 inet 192.168.100.9/24 scope global mlx_0_1 00:07:27.584 valid_lft forever preferred_lft forever 00:07:27.584 13:36:29 -- nvmf/common.sh@411 -- # return 0 00:07:27.584 13:36:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:27.584 13:36:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:27.584 13:36:29 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:27.584 13:36:29 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:27.584 13:36:29 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:27.585 13:36:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.585 13:36:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:27.585 13:36:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:27.585 13:36:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.585 13:36:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:27.585 13:36:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.585 13:36:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.585 13:36:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:27.585 13:36:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:27.585 13:36:29 -- nvmf/common.sh@105 -- # continue 2 00:07:27.585 13:36:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.585 13:36:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.585 13:36:29 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:27.585 13:36:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.585 13:36:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:27.585 13:36:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:27.585 13:36:29 -- nvmf/common.sh@105 -- # continue 2 00:07:27.585 13:36:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:27.585 13:36:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:27.585 13:36:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:27.585 13:36:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:27.585 13:36:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.585 13:36:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.585 13:36:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:27.585 13:36:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:27.585 13:36:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:27.585 13:36:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:27.585 13:36:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.585 13:36:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.585 13:36:29 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:27.585 192.168.100.9' 00:07:27.585 13:36:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:27.585 192.168.100.9' 00:07:27.585 13:36:29 -- nvmf/common.sh@446 -- # head -n 1 00:07:27.585 13:36:29 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:27.585 13:36:29 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:27.585 192.168.100.9' 00:07:27.585 13:36:29 -- nvmf/common.sh@447 -- # tail -n +2 00:07:27.585 13:36:29 -- nvmf/common.sh@447 -- # head -n 1 00:07:27.585 13:36:29 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:27.585 13:36:29 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:27.585 13:36:29 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:27.585 13:36:29 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:27.585 13:36:29 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:27.585 13:36:29 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:27.585 13:36:29 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:27.585 13:36:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:27.585 13:36:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:27.585 13:36:29 -- common/autotest_common.sh@10 -- # set +x 00:07:27.585 13:36:29 -- nvmf/common.sh@470 -- # nvmfpid=1048454 00:07:27.585 13:36:29 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.585 13:36:29 -- nvmf/common.sh@471 -- # waitforlisten 1048454 00:07:27.585 13:36:29 -- common/autotest_common.sh@817 -- # '[' -z 1048454 ']' 00:07:27.585 13:36:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.585 13:36:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:27.585 13:36:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
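The target addresses used for the rest of the run are not hard-coded; they are read back from the mlx_0_0/mlx_0_1 interfaces with the ip/awk/cut pipeline shown above and split into a first and second target IP. A small sketch of that derivation on its own (interface names and addresses taken from this log):

    # Pull the IPv4 address off each RDMA-capable netdev, as the helpers above do.
    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    NVMF_FIRST_TARGET_IP=$(get_ip mlx_0_0)    # 192.168.100.8 on this rig
    NVMF_SECOND_TARGET_IP=$(get_ip mlx_0_1)   # 192.168.100.9 on this rig
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"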
00:07:27.585 13:36:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:27.585 13:36:29 -- common/autotest_common.sh@10 -- # set +x 00:07:27.585 [2024-04-18 13:36:29.908275] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:27.585 [2024-04-18 13:36:29.908360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.585 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.585 [2024-04-18 13:36:29.986228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.585 [2024-04-18 13:36:30.110098] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.585 [2024-04-18 13:36:30.110161] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.585 [2024-04-18 13:36:30.110178] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.585 [2024-04-18 13:36:30.110193] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.585 [2024-04-18 13:36:30.110205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.585 [2024-04-18 13:36:30.110296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.585 [2024-04-18 13:36:30.110354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.585 [2024-04-18 13:36:30.110401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.585 [2024-04-18 13:36:30.110405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.585 13:36:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:27.585 13:36:30 -- common/autotest_common.sh@850 -- # return 0 00:07:27.585 13:36:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:27.585 13:36:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:27.585 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.585 13:36:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.585 13:36:30 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:27.585 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.585 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.585 [2024-04-18 13:36:30.301534] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12ae090/0x12b2580) succeed. 00:07:27.585 [2024-04-18 13:36:30.313826] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12af680/0x12f3c10) succeed. 
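At this point the referrals test has an RDMA transport inside the freshly started nvmf_tgt and, in the lines that follow, puts a discovery listener on 192.168.100.8:8009 and registers three referral endpoints. As a sketch, the same sequence issued directly against the RPC socket (the scripts/rpc.py path is an assumption; the RPC names and arguments are the ones visible in this log):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length    # the test expects 3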
00:07:27.843 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.843 13:36:30 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:07:27.843 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.843 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.843 [2024-04-18 13:36:30.480232] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:07:27.843 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.843 13:36:30 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:07:27.843 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.843 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.843 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.843 13:36:30 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:07:27.843 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.843 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.843 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.843 13:36:30 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:07:27.843 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.843 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.843 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.843 13:36:30 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.843 13:36:30 -- target/referrals.sh@48 -- # jq length 00:07:27.843 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.843 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.843 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.844 13:36:30 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:27.844 13:36:30 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:27.844 13:36:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:27.844 13:36:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.844 13:36:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:27.844 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.844 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.844 13:36:30 -- target/referrals.sh@21 -- # sort 00:07:27.844 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.844 13:36:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:27.844 13:36:30 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:27.844 13:36:30 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:27.844 13:36:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:27.844 13:36:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:27.844 13:36:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:27.844 13:36:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:27.844 13:36:30 -- target/referrals.sh@26 -- # sort 00:07:28.101 13:36:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
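The check above is done twice: once through the RPC view (nvmf_discovery_get_referrals piped through jq) and once from the host side with nvme-cli against the discovery service, and the two address lists must match. A sketch of the host-side half, using the hostnqn/hostid and jq filter from this log:

    # Ask the discovery controller on 192.168.100.8:8009 for its log page and
    # keep only the referral entries (everything that is not the current
    # discovery subsystem), sorted so the comparison is order-independent.
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
        --hostid=6b85a288-a0c4-e211-af09-001e678e7911 \
        -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort
    # expected here: 127.0.0.2 127.0.0.3 127.0.0.4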
00:07:28.101 13:36:30 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:28.101 13:36:30 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:07:28.101 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.101 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.101 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.101 13:36:30 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:07:28.101 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.101 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.101 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.101 13:36:30 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:07:28.101 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.102 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.102 13:36:30 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.102 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.102 13:36:30 -- target/referrals.sh@56 -- # jq length 00:07:28.102 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.102 13:36:30 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:28.102 13:36:30 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:28.102 13:36:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.102 13:36:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.102 13:36:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.102 13:36:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.102 13:36:30 -- target/referrals.sh@26 -- # sort 00:07:28.102 13:36:30 -- target/referrals.sh@26 -- # echo 00:07:28.102 13:36:30 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:28.102 13:36:30 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:07:28.102 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.102 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.102 13:36:30 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:28.102 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.102 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.102 13:36:30 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:28.102 13:36:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:28.102 13:36:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.102 13:36:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:28.102 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.102 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 13:36:30 -- 
target/referrals.sh@21 -- # sort 00:07:28.102 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.359 13:36:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:28.359 13:36:30 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:28.359 13:36:30 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:28.359 13:36:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.359 13:36:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.359 13:36:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.359 13:36:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.360 13:36:30 -- target/referrals.sh@26 -- # sort 00:07:28.360 13:36:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:28.360 13:36:31 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:28.360 13:36:31 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:28.360 13:36:31 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:28.360 13:36:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:28.360 13:36:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.360 13:36:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.360 13:36:31 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:28.360 13:36:31 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.360 13:36:31 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:28.360 13:36:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.360 13:36:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.360 13:36:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:28.617 13:36:31 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:28.617 13:36:31 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:28.617 13:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.617 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:07:28.617 13:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.617 13:36:31 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:28.617 13:36:31 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:28.617 13:36:31 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.617 13:36:31 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:28.617 13:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.617 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:07:28.617 13:36:31 -- target/referrals.sh@21 -- # 
sort 00:07:28.617 13:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.617 13:36:31 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:28.617 13:36:31 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.617 13:36:31 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:28.617 13:36:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.617 13:36:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.617 13:36:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.617 13:36:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.617 13:36:31 -- target/referrals.sh@26 -- # sort 00:07:28.617 13:36:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:28.617 13:36:31 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.617 13:36:31 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:28.617 13:36:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:28.617 13:36:31 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:28.617 13:36:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.617 13:36:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.875 13:36:31 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:28.875 13:36:31 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.875 13:36:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.875 13:36:31 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:28.875 13:36:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.875 13:36:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:28.875 13:36:31 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:28.875 13:36:31 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:28.875 13:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.875 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:07:28.875 13:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.875 13:36:31 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.875 13:36:31 -- target/referrals.sh@82 -- # jq length 00:07:28.875 13:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.875 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:07:28.875 13:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.875 13:36:31 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:28.875 13:36:31 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:28.875 13:36:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.875 13:36:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.875 13:36:31 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:28.875 13:36:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.875 13:36:31 -- target/referrals.sh@26 -- # sort 00:07:29.133 13:36:31 -- target/referrals.sh@26 -- # echo 00:07:29.133 13:36:31 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:29.133 13:36:31 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:29.133 13:36:31 -- target/referrals.sh@86 -- # nvmftestfini 00:07:29.133 13:36:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:29.133 13:36:31 -- nvmf/common.sh@117 -- # sync 00:07:29.133 13:36:31 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:29.133 13:36:31 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:29.133 13:36:31 -- nvmf/common.sh@120 -- # set +e 00:07:29.133 13:36:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.133 13:36:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:29.133 rmmod nvme_rdma 00:07:29.133 rmmod nvme_fabrics 00:07:29.133 13:36:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.133 13:36:31 -- nvmf/common.sh@124 -- # set -e 00:07:29.133 13:36:31 -- nvmf/common.sh@125 -- # return 0 00:07:29.133 13:36:31 -- nvmf/common.sh@478 -- # '[' -n 1048454 ']' 00:07:29.133 13:36:31 -- nvmf/common.sh@479 -- # killprocess 1048454 00:07:29.133 13:36:31 -- common/autotest_common.sh@936 -- # '[' -z 1048454 ']' 00:07:29.133 13:36:31 -- common/autotest_common.sh@940 -- # kill -0 1048454 00:07:29.133 13:36:31 -- common/autotest_common.sh@941 -- # uname 00:07:29.133 13:36:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:29.133 13:36:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1048454 00:07:29.133 13:36:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:29.133 13:36:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:29.133 13:36:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1048454' 00:07:29.133 killing process with pid 1048454 00:07:29.133 13:36:31 -- common/autotest_common.sh@955 -- # kill 1048454 00:07:29.133 13:36:31 -- common/autotest_common.sh@960 -- # wait 1048454 00:07:29.701 13:36:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:29.701 13:36:32 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:29.701 00:07:29.701 real 0m5.430s 00:07:29.701 user 0m9.909s 00:07:29.701 sys 0m2.716s 00:07:29.701 13:36:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.701 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.701 ************************************ 00:07:29.701 END TEST nvmf_referrals 00:07:29.701 ************************************ 00:07:29.701 13:36:32 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:29.701 13:36:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:29.701 13:36:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.701 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.701 ************************************ 00:07:29.701 START TEST nvmf_connect_disconnect 00:07:29.701 ************************************ 00:07:29.701 13:36:32 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:29.701 * Looking for test storage... 00:07:29.701 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:29.701 13:36:32 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.701 13:36:32 -- nvmf/common.sh@7 -- # uname -s 00:07:29.701 13:36:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.701 13:36:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.701 13:36:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.701 13:36:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.701 13:36:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.701 13:36:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.701 13:36:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.701 13:36:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.701 13:36:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.701 13:36:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.701 13:36:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:29.701 13:36:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:29.701 13:36:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.701 13:36:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.701 13:36:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.702 13:36:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.702 13:36:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:29.702 13:36:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.702 13:36:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.702 13:36:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.702 13:36:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.702 13:36:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.702 13:36:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.702 13:36:32 -- paths/export.sh@5 -- # export PATH 00:07:29.702 13:36:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.702 13:36:32 -- nvmf/common.sh@47 -- # : 0 00:07:29.702 13:36:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.702 13:36:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.702 13:36:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.702 13:36:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.702 13:36:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.702 13:36:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.702 13:36:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.702 13:36:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.702 13:36:32 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.702 13:36:32 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.702 13:36:32 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:29.702 13:36:32 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:29.702 13:36:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.702 13:36:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:29.702 13:36:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:29.702 13:36:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:29.702 13:36:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.702 13:36:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.702 13:36:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.702 13:36:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:29.702 13:36:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:29.702 13:36:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.702 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.014 13:36:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:33.014 13:36:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.014 13:36:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.014 13:36:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.014 13:36:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.014 13:36:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.014 13:36:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.014 13:36:35 -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.014 13:36:35 -- nvmf/common.sh@295 -- # local -ga net_devs 
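MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above size the RAM-backed bdev that this test later exports; the connect/disconnect loop never touches a real disk. A sketch of just that step (the test itself goes through its rpc_cmd wrapper and captures the returned bdev name, which comes back as Malloc0 further down; the rpc.py path is an assumption):

    # 64 MB malloc bdev with 512-byte blocks, matching the variables above.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path
    bdev=$($RPC bdev_malloc_create 64 512)
    echo "created bdev: $bdev"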
00:07:33.014 13:36:35 -- nvmf/common.sh@296 -- # e810=() 00:07:33.014 13:36:35 -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.014 13:36:35 -- nvmf/common.sh@297 -- # x722=() 00:07:33.014 13:36:35 -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.014 13:36:35 -- nvmf/common.sh@298 -- # mlx=() 00:07:33.014 13:36:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.014 13:36:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.014 13:36:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.014 13:36:35 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:33.014 13:36:35 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:33.014 13:36:35 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:33.014 13:36:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.014 13:36:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:33.014 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:33.014 13:36:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.014 13:36:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:33.014 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:33.014 13:36:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.014 13:36:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.014 13:36:35 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.014 13:36:35 
-- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:33.014 13:36:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.014 13:36:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:33.014 Found net devices under 0000:81:00.0: mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.014 13:36:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.014 13:36:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:33.014 13:36:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.014 13:36:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:33.014 Found net devices under 0000:81:00.1: mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.014 13:36:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:33.014 13:36:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:33.014 13:36:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:33.014 13:36:35 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:33.014 13:36:35 -- nvmf/common.sh@58 -- # uname 00:07:33.014 13:36:35 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:33.014 13:36:35 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:33.014 13:36:35 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:33.014 13:36:35 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:33.014 13:36:35 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:33.014 13:36:35 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:33.014 13:36:35 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:33.014 13:36:35 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:33.014 13:36:35 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:33.014 13:36:35 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:33.014 13:36:35 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:33.014 13:36:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.014 13:36:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:33.014 13:36:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:33.014 13:36:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.014 13:36:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:33.014 13:36:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@105 -- # continue 2 00:07:33.014 13:36:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@105 -- # continue 2 00:07:33.014 13:36:35 -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:33.014 13:36:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.014 13:36:35 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:33.014 13:36:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:33.014 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:33.014 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:33.014 altname enp129s0f0np0 00:07:33.014 inet 192.168.100.8/24 scope global mlx_0_0 00:07:33.014 valid_lft forever preferred_lft forever 00:07:33.014 13:36:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:33.014 13:36:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.014 13:36:35 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:33.014 13:36:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:33.014 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:33.014 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:33.014 altname enp129s0f1np1 00:07:33.014 inet 192.168.100.9/24 scope global mlx_0_1 00:07:33.014 valid_lft forever preferred_lft forever 00:07:33.014 13:36:35 -- nvmf/common.sh@411 -- # return 0 00:07:33.014 13:36:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:33.014 13:36:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:33.014 13:36:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:33.014 13:36:35 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:33.014 13:36:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.014 13:36:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:33.014 13:36:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:33.014 13:36:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.014 13:36:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:33.014 13:36:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@105 -- # continue 2 00:07:33.014 13:36:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.014 13:36:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:33.014 13:36:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@105 -- # continue 2 
00:07:33.014 13:36:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:33.014 13:36:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.014 13:36:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:33.014 13:36:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.014 13:36:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.014 13:36:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:33.014 192.168.100.9' 00:07:33.014 13:36:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:33.014 192.168.100.9' 00:07:33.014 13:36:35 -- nvmf/common.sh@446 -- # head -n 1 00:07:33.014 13:36:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:33.014 13:36:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:33.014 192.168.100.9' 00:07:33.014 13:36:35 -- nvmf/common.sh@447 -- # tail -n +2 00:07:33.014 13:36:35 -- nvmf/common.sh@447 -- # head -n 1 00:07:33.014 13:36:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:33.014 13:36:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:33.014 13:36:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:33.014 13:36:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:33.014 13:36:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:33.014 13:36:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:33.014 13:36:35 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:33.014 13:36:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:33.014 13:36:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:33.015 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.015 13:36:35 -- nvmf/common.sh@470 -- # nvmfpid=1050780 00:07:33.015 13:36:35 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.015 13:36:35 -- nvmf/common.sh@471 -- # waitforlisten 1050780 00:07:33.015 13:36:35 -- common/autotest_common.sh@817 -- # '[' -z 1050780 ']' 00:07:33.015 13:36:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.015 13:36:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:33.015 13:36:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.015 13:36:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:33.015 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.015 [2024-04-18 13:36:35.305813] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
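waitforlisten blocks until the freshly launched nvmf_tgt (pid 1050780 here) is actually serving RPCs on /var/tmp/spdk.sock before any configuration is attempted. A rough standalone equivalent, assuming rpc.py at its usual location (the polling loop and the use of rpc_get_methods as a liveness probe are illustrative, not the harness's exact implementation):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path
    for i in {1..100}; do
        if $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt is up"; break
        fi
        sleep 0.1
    done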
00:07:33.015 [2024-04-18 13:36:35.305900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.015 [2024-04-18 13:36:35.386409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.015 [2024-04-18 13:36:35.511911] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.015 [2024-04-18 13:36:35.511985] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.015 [2024-04-18 13:36:35.512003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.015 [2024-04-18 13:36:35.512017] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.015 [2024-04-18 13:36:35.512029] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.015 [2024-04-18 13:36:35.512118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.015 [2024-04-18 13:36:35.512172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.015 [2024-04-18 13:36:35.512221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.015 [2024-04-18 13:36:35.512224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.015 13:36:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:33.015 13:36:35 -- common/autotest_common.sh@850 -- # return 0 00:07:33.015 13:36:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:33.015 13:36:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:33.015 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.015 13:36:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.015 13:36:35 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:33.015 13:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.015 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.015 [2024-04-18 13:36:35.680066] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:33.015 [2024-04-18 13:36:35.705396] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7d4090/0x7d8580) succeed. 00:07:33.015 [2024-04-18 13:36:35.717783] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7d5680/0x819c10) succeed. 
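The four "Reactor started on core N" notices above follow from the -m 0xF mask handed to nvmf_tgt: bits 0 through 3 are set, selecting cores 0-3. An illustrative helper, not part of the SPDK scripts, that expands such a mask:

```bash
# Illustrative only (not part of the SPDK test scripts): expand a core mask
# like the 0xF passed via "nvmf_tgt ... -m 0xF" into the cores it selects.
mask_to_cores() {
    local mask=$(( $1 ))
    local core=0
    local cores=()
    while (( mask > 0 )); do
        if (( mask & 1 )); then
            cores+=("$core")
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[@]}"
}

mask_to_cores 0xF   # prints "0 1 2 3", matching the four reactor notices above
```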
00:07:33.271 13:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:33.271 13:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.271 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.271 13:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:33.271 13:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.271 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.271 13:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.271 13:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.271 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.271 13:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:33.271 13:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.271 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:33.271 [2024-04-18 13:36:35.897598] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:33.271 13:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:33.271 13:36:35 -- target/connect_disconnect.sh@34 -- # set +x 00:07:37.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.410 13:36:57 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:55.410 13:36:57 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:55.410 13:36:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:55.410 13:36:57 -- nvmf/common.sh@117 -- # sync 00:07:55.410 13:36:57 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:55.410 13:36:57 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:55.410 13:36:57 -- nvmf/common.sh@120 -- # set +e 00:07:55.410 13:36:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.410 13:36:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:55.410 rmmod nvme_rdma 00:07:55.410 rmmod nvme_fabrics 00:07:55.410 13:36:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.410 13:36:57 -- nvmf/common.sh@124 -- # set -e 00:07:55.410 13:36:57 -- nvmf/common.sh@125 -- # return 0 00:07:55.410 13:36:57 -- nvmf/common.sh@478 -- # '[' -n 1050780 ']' 00:07:55.410 13:36:57 -- nvmf/common.sh@479 -- # killprocess 1050780 00:07:55.410 13:36:57 -- common/autotest_common.sh@936 -- # '[' -z 1050780 ']' 00:07:55.410 13:36:57 -- common/autotest_common.sh@940 -- # kill -0 1050780 00:07:55.410 13:36:57 -- common/autotest_common.sh@941 -- # uname 00:07:55.410 13:36:57 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:07:55.410 13:36:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1050780 00:07:55.410 13:36:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:55.410 13:36:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:55.410 13:36:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1050780' 00:07:55.410 killing process with pid 1050780 00:07:55.410 13:36:57 -- common/autotest_common.sh@955 -- # kill 1050780 00:07:55.410 13:36:57 -- common/autotest_common.sh@960 -- # wait 1050780 00:07:55.410 13:36:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:55.410 13:36:57 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:55.410 00:07:55.410 real 0m25.275s 00:07:55.410 user 1m28.406s 00:07:55.410 sys 0m3.097s 00:07:55.410 13:36:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.410 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:07:55.410 ************************************ 00:07:55.410 END TEST nvmf_connect_disconnect 00:07:55.410 ************************************ 00:07:55.410 13:36:57 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:55.410 13:36:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:55.410 13:36:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.410 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:07:55.411 ************************************ 00:07:55.411 START TEST nvmf_multitarget 00:07:55.411 ************************************ 00:07:55.411 13:36:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:55.411 * Looking for test storage... 
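For reference, the target-side provisioning that the connect/disconnect trace above performed through rpc_cmd is the following sequence, shown here against scripts/rpc.py (path illustrative; the RPC names and arguments are the ones traced):

```bash
# Provisioning sequence from the connect_disconnect test above, rewritten
# against scripts/rpc.py instead of the suite's rpc_cmd wrapper.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
$rpc bdev_malloc_create 64 512                      # returns bdev name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
```

The host side then connects to and disconnects from nqn.2016-06.io.spdk:cnode1 five times (num_iterations=5), which produces the five "disconnected 1 controller(s)" lines above.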
00:07:55.411 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:55.411 13:36:57 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.411 13:36:57 -- nvmf/common.sh@7 -- # uname -s 00:07:55.411 13:36:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.411 13:36:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.411 13:36:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.411 13:36:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.411 13:36:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.411 13:36:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.411 13:36:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.411 13:36:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.411 13:36:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.411 13:36:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.411 13:36:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:55.411 13:36:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:55.411 13:36:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.411 13:36:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.411 13:36:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.411 13:36:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.411 13:36:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:55.411 13:36:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.411 13:36:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.411 13:36:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.411 13:36:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.411 13:36:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.411 13:36:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.411 13:36:57 -- paths/export.sh@5 -- # export PATH 00:07:55.411 13:36:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.411 13:36:57 -- nvmf/common.sh@47 -- # : 0 00:07:55.411 13:36:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.411 13:36:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.411 13:36:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.411 13:36:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.411 13:36:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.411 13:36:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.411 13:36:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.411 13:36:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.411 13:36:57 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:55.411 13:36:57 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:55.411 13:36:57 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:55.411 13:36:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.411 13:36:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:55.411 13:36:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:55.411 13:36:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:55.411 13:36:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.411 13:36:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.411 13:36:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.411 13:36:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:55.411 13:36:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:55.411 13:36:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.411 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:07:57.991 13:37:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:57.991 13:37:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.991 13:37:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.991 13:37:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.991 13:37:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.991 13:37:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.991 13:37:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.991 13:37:00 -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.991 13:37:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.991 13:37:00 -- 
nvmf/common.sh@296 -- # e810=() 00:07:57.991 13:37:00 -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.991 13:37:00 -- nvmf/common.sh@297 -- # x722=() 00:07:57.991 13:37:00 -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.991 13:37:00 -- nvmf/common.sh@298 -- # mlx=() 00:07:57.991 13:37:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.991 13:37:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.991 13:37:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.991 13:37:00 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:57.991 13:37:00 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:57.991 13:37:00 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:57.991 13:37:00 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:57.991 13:37:00 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:57.991 13:37:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.991 13:37:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.991 13:37:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:57.991 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:57.991 13:37:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:57.991 13:37:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:57.991 13:37:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:57.991 13:37:00 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:57.991 13:37:00 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:57.992 13:37:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:57.992 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:57.992 13:37:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:57.992 13:37:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.992 13:37:00 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.992 13:37:00 -- nvmf/common.sh@384 -- # 
(( 1 == 0 )) 00:07:57.992 13:37:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.992 13:37:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:57.992 Found net devices under 0000:81:00.0: mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.992 13:37:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.992 13:37:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:57.992 13:37:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.992 13:37:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:57.992 Found net devices under 0000:81:00.1: mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.992 13:37:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:57.992 13:37:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:57.992 13:37:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:57.992 13:37:00 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:57.992 13:37:00 -- nvmf/common.sh@58 -- # uname 00:07:57.992 13:37:00 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:57.992 13:37:00 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:57.992 13:37:00 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:57.992 13:37:00 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:57.992 13:37:00 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:57.992 13:37:00 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:57.992 13:37:00 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:57.992 13:37:00 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:57.992 13:37:00 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:57.992 13:37:00 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:57.992 13:37:00 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:57.992 13:37:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:57.992 13:37:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:57.992 13:37:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:57.992 13:37:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:57.992 13:37:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:57.992 13:37:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@105 -- # continue 2 00:07:57.992 13:37:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@105 -- # continue 2 00:07:57.992 13:37:00 -- nvmf/common.sh@73 -- # for nic_name 
in $(get_rdma_if_list) 00:07:57.992 13:37:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:57.992 13:37:00 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:57.992 13:37:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:57.992 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:57.992 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:57.992 altname enp129s0f0np0 00:07:57.992 inet 192.168.100.8/24 scope global mlx_0_0 00:07:57.992 valid_lft forever preferred_lft forever 00:07:57.992 13:37:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:57.992 13:37:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:57.992 13:37:00 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:57.992 13:37:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:57.992 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:57.992 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:57.992 altname enp129s0f1np1 00:07:57.992 inet 192.168.100.9/24 scope global mlx_0_1 00:07:57.992 valid_lft forever preferred_lft forever 00:07:57.992 13:37:00 -- nvmf/common.sh@411 -- # return 0 00:07:57.992 13:37:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:57.992 13:37:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:57.992 13:37:00 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:57.992 13:37:00 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:57.992 13:37:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:57.992 13:37:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:57.992 13:37:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:57.992 13:37:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:57.992 13:37:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:57.992 13:37:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@105 -- # continue 2 00:07:57.992 13:37:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:57.992 13:37:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:57.992 13:37:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@105 -- # continue 2 00:07:57.992 13:37:00 -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:57.992 13:37:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:57.992 13:37:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:57.992 13:37:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:57.992 13:37:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:57.992 13:37:00 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:57.992 192.168.100.9' 00:07:57.992 13:37:00 -- nvmf/common.sh@446 -- # head -n 1 00:07:57.992 13:37:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:57.992 192.168.100.9' 00:07:57.992 13:37:00 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:57.992 13:37:00 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:57.992 192.168.100.9' 00:07:57.992 13:37:00 -- nvmf/common.sh@447 -- # tail -n +2 00:07:57.992 13:37:00 -- nvmf/common.sh@447 -- # head -n 1 00:07:57.992 13:37:00 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:57.992 13:37:00 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:57.992 13:37:00 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:57.992 13:37:00 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:57.992 13:37:00 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:57.992 13:37:00 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:57.992 13:37:00 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:57.992 13:37:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:57.992 13:37:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:57.992 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:07:57.992 13:37:00 -- nvmf/common.sh@470 -- # nvmfpid=1055490 00:07:57.992 13:37:00 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.992 13:37:00 -- nvmf/common.sh@471 -- # waitforlisten 1055490 00:07:57.992 13:37:00 -- common/autotest_common.sh@817 -- # '[' -z 1055490 ']' 00:07:57.992 13:37:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.992 13:37:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:57.992 13:37:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.992 13:37:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:57.992 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:07:57.992 [2024-04-18 13:37:00.700447] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
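The device discovery repeated above for this run (gather_supported_nvmf_pci_devs plus load_ib_rdma_modules) amounts to matching the Mellanox PCI IDs, reading each function's netdev name from sysfs, and loading the RDMA kernel modules. A rough standalone sketch, assuming lspci is available and the parts are 15b3:1015 as reported for 0000:81:00.0 and 0000:81:00.1:

```bash
# Rough sketch of the discovery and module loading traced above.
# Assumes lspci is available and the NICs are Mellanox 15b3:1015 parts.
for pci in $(lspci -D -d 15b3:1015 | awk '{print $1}'); do
    # Each PCI function exposes its netdev name under sysfs,
    # the same path the suite globs in nvmf/common.sh@383.
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdev" ] || continue   # skip functions with no netdev bound
        echo "Found net device under $pci: $(basename "$netdev")"
    done
done

# Kernel modules the suite loads before assigning IPs (load_ib_rdma_modules).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
```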
00:07:57.992 [2024-04-18 13:37:00.700540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.993 [2024-04-18 13:37:00.781441] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.249 [2024-04-18 13:37:00.907790] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.249 [2024-04-18 13:37:00.907861] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.249 [2024-04-18 13:37:00.907877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.249 [2024-04-18 13:37:00.907891] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.249 [2024-04-18 13:37:00.907902] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.249 [2024-04-18 13:37:00.907991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.249 [2024-04-18 13:37:00.908023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.249 [2024-04-18 13:37:00.908074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.249 [2024-04-18 13:37:00.908078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.249 13:37:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:58.249 13:37:01 -- common/autotest_common.sh@850 -- # return 0 00:07:58.249 13:37:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:58.249 13:37:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:58.249 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:07:58.588 13:37:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.588 13:37:01 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:58.588 13:37:01 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:58.588 13:37:01 -- target/multitarget.sh@21 -- # jq length 00:07:58.588 13:37:01 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:58.588 13:37:01 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:58.588 "nvmf_tgt_1" 00:07:58.588 13:37:01 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:58.843 "nvmf_tgt_2" 00:07:58.843 13:37:01 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:58.843 13:37:01 -- target/multitarget.sh@28 -- # jq length 00:07:58.843 13:37:01 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:58.843 13:37:01 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:59.099 true 00:07:59.099 13:37:01 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:59.356 true 00:07:59.356 13:37:02 -- 
target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:59.356 13:37:02 -- target/multitarget.sh@35 -- # jq length 00:07:59.613 13:37:02 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:59.613 13:37:02 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:59.613 13:37:02 -- target/multitarget.sh@41 -- # nvmftestfini 00:07:59.613 13:37:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:59.613 13:37:02 -- nvmf/common.sh@117 -- # sync 00:07:59.613 13:37:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:59.613 13:37:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:59.613 13:37:02 -- nvmf/common.sh@120 -- # set +e 00:07:59.613 13:37:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.613 13:37:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:59.613 rmmod nvme_rdma 00:07:59.613 rmmod nvme_fabrics 00:07:59.613 13:37:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.613 13:37:02 -- nvmf/common.sh@124 -- # set -e 00:07:59.613 13:37:02 -- nvmf/common.sh@125 -- # return 0 00:07:59.613 13:37:02 -- nvmf/common.sh@478 -- # '[' -n 1055490 ']' 00:07:59.613 13:37:02 -- nvmf/common.sh@479 -- # killprocess 1055490 00:07:59.613 13:37:02 -- common/autotest_common.sh@936 -- # '[' -z 1055490 ']' 00:07:59.613 13:37:02 -- common/autotest_common.sh@940 -- # kill -0 1055490 00:07:59.613 13:37:02 -- common/autotest_common.sh@941 -- # uname 00:07:59.613 13:37:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.613 13:37:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1055490 00:07:59.613 13:37:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.614 13:37:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.614 13:37:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1055490' 00:07:59.614 killing process with pid 1055490 00:07:59.614 13:37:02 -- common/autotest_common.sh@955 -- # kill 1055490 00:07:59.614 13:37:02 -- common/autotest_common.sh@960 -- # wait 1055490 00:07:59.871 13:37:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:59.871 13:37:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:59.871 00:07:59.871 real 0m4.759s 00:07:59.871 user 0m8.322s 00:07:59.871 sys 0m2.517s 00:07:59.871 13:37:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:59.871 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:07:59.871 ************************************ 00:07:59.871 END TEST nvmf_multitarget 00:07:59.871 ************************************ 00:07:59.871 13:37:02 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:07:59.871 13:37:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:59.871 13:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.871 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:08:00.130 ************************************ 00:08:00.130 START TEST nvmf_rpc 00:08:00.130 ************************************ 00:08:00.130 13:37:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:00.130 * Looking for test storage... 
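The multitarget pass condenses to creating and deleting extra targets through the test's multitarget_rpc.py helper and counting them with jq, roughly as follows (same helper path as in the trace):

```bash
# Condensed form of the multitarget flow traced above, using the same helper.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

$rpc_py nvmf_get_targets | jq length            # 1: only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc_py nvmf_get_targets | jq length            # 3: default plus the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets | jq length            # back to 1
```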
00:08:00.130 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:00.130 13:37:02 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.130 13:37:02 -- nvmf/common.sh@7 -- # uname -s 00:08:00.130 13:37:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.130 13:37:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.130 13:37:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.130 13:37:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.130 13:37:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.130 13:37:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.130 13:37:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.130 13:37:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.130 13:37:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.130 13:37:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.130 13:37:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:08:00.130 13:37:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:08:00.130 13:37:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.130 13:37:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.130 13:37:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.130 13:37:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.130 13:37:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:00.130 13:37:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.130 13:37:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.130 13:37:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.130 13:37:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.131 13:37:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.131 13:37:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.131 13:37:02 -- paths/export.sh@5 -- # export PATH 00:08:00.131 13:37:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.131 13:37:02 -- nvmf/common.sh@47 -- # : 0 00:08:00.131 13:37:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.131 13:37:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.131 13:37:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.131 13:37:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.131 13:37:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.131 13:37:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.131 13:37:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.131 13:37:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.131 13:37:02 -- target/rpc.sh@11 -- # loops=5 00:08:00.131 13:37:02 -- target/rpc.sh@23 -- # nvmftestinit 00:08:00.131 13:37:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:00.131 13:37:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.131 13:37:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:00.131 13:37:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:00.131 13:37:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:00.131 13:37:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.131 13:37:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.131 13:37:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.131 13:37:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:00.131 13:37:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:00.131 13:37:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.131 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:08:03.409 13:37:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:03.409 13:37:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.409 13:37:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.409 13:37:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.409 13:37:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.409 13:37:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.409 13:37:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.409 13:37:05 -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.409 13:37:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.409 13:37:05 -- nvmf/common.sh@296 -- # e810=() 00:08:03.409 13:37:05 -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.409 
13:37:05 -- nvmf/common.sh@297 -- # x722=() 00:08:03.409 13:37:05 -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.409 13:37:05 -- nvmf/common.sh@298 -- # mlx=() 00:08:03.409 13:37:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.409 13:37:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.409 13:37:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.409 13:37:05 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:03.409 13:37:05 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:03.409 13:37:05 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:03.409 13:37:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.409 13:37:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.409 13:37:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:08:03.409 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:08:03.409 13:37:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.409 13:37:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.409 13:37:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:08:03.409 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:08:03.409 13:37:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.409 13:37:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.409 13:37:05 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:03.409 13:37:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.410 13:37:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:03.410 13:37:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:08:03.410 13:37:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:08:03.410 Found net devices under 0000:81:00.0: mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.410 13:37:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.410 13:37:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:03.410 13:37:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.410 13:37:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:08:03.410 Found net devices under 0000:81:00.1: mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.410 13:37:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:03.410 13:37:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:03.410 13:37:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:03.410 13:37:05 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:03.410 13:37:05 -- nvmf/common.sh@58 -- # uname 00:08:03.410 13:37:05 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:03.410 13:37:05 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:03.410 13:37:05 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:03.410 13:37:05 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:03.410 13:37:05 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:03.410 13:37:05 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:03.410 13:37:05 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:03.410 13:37:05 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:03.410 13:37:05 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:03.410 13:37:05 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:03.410 13:37:05 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:03.410 13:37:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.410 13:37:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:03.410 13:37:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:03.410 13:37:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.410 13:37:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:03.410 13:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@105 -- # continue 2 00:08:03.410 13:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@105 -- # continue 2 00:08:03.410 13:37:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:03.410 13:37:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:08:03.410 13:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.410 13:37:05 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:03.410 13:37:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:03.410 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.410 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:08:03.410 altname enp129s0f0np0 00:08:03.410 inet 192.168.100.8/24 scope global mlx_0_0 00:08:03.410 valid_lft forever preferred_lft forever 00:08:03.410 13:37:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:03.410 13:37:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.410 13:37:05 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:03.410 13:37:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:03.410 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.410 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:08:03.410 altname enp129s0f1np1 00:08:03.410 inet 192.168.100.9/24 scope global mlx_0_1 00:08:03.410 valid_lft forever preferred_lft forever 00:08:03.410 13:37:05 -- nvmf/common.sh@411 -- # return 0 00:08:03.410 13:37:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:03.410 13:37:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:03.410 13:37:05 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:03.410 13:37:05 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:03.410 13:37:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.410 13:37:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:03.410 13:37:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:03.410 13:37:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.410 13:37:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:03.410 13:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@105 -- # continue 2 00:08:03.410 13:37:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.410 13:37:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.410 13:37:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@105 -- # continue 2 00:08:03.410 13:37:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:03.410 13:37:05 -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.410 13:37:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:03.410 13:37:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.410 13:37:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.410 13:37:05 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:03.410 192.168.100.9' 00:08:03.410 13:37:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:03.410 192.168.100.9' 00:08:03.410 13:37:05 -- nvmf/common.sh@446 -- # head -n 1 00:08:03.410 13:37:05 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:03.410 13:37:05 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:03.410 192.168.100.9' 00:08:03.410 13:37:05 -- nvmf/common.sh@447 -- # tail -n +2 00:08:03.410 13:37:05 -- nvmf/common.sh@447 -- # head -n 1 00:08:03.410 13:37:05 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:03.410 13:37:05 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:03.410 13:37:05 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:03.410 13:37:05 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:03.410 13:37:05 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:03.410 13:37:05 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:03.410 13:37:05 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:03.410 13:37:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:03.410 13:37:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:03.410 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:08:03.410 13:37:05 -- nvmf/common.sh@470 -- # nvmfpid=1057851 00:08:03.410 13:37:05 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.410 13:37:05 -- nvmf/common.sh@471 -- # waitforlisten 1057851 00:08:03.410 13:37:05 -- common/autotest_common.sh@817 -- # '[' -z 1057851 ']' 00:08:03.410 13:37:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.410 13:37:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:03.410 13:37:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.410 13:37:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:03.410 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:08:03.410 [2024-04-18 13:37:05.662060] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
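The rpc test starting here takes nvmf_get_stats snapshots before and after the RDMA transport is created; the JSON printed below carries one poll group per reactor core. A sketch of the kind of checks applied to it (the first two queries mirror the traced jcount/jq checks, the last is an extra illustration; the rpc.py path is illustrative, the traced script goes through its rpc_cmd wrapper):

```bash
# Sketch of checks against the nvmf_get_stats output shown below.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

stats=$($rpc nvmf_get_stats)

# One poll group per reactor core (core mask 0xF -> 4 groups expected).
echo "$stats" | jq '.poll_groups[].name' | wc -l

# Before nvmf_create_transport, no transport is attached to any poll group.
echo "$stats" | jq '.poll_groups[0].transports[0]'    # prints null at that point

# Once the RDMA transport exists, per-device poller counters appear;
# summing them is one way to confirm the mlx5 devices are being polled.
echo "$stats" | jq '[.poll_groups[].transports[].devices[].polls] | add'
```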
00:08:03.410 [2024-04-18 13:37:05.662154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.410 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.410 [2024-04-18 13:37:05.735538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.410 [2024-04-18 13:37:05.855408] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.410 [2024-04-18 13:37:05.855477] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.410 [2024-04-18 13:37:05.855493] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.410 [2024-04-18 13:37:05.855507] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.410 [2024-04-18 13:37:05.855519] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.411 [2024-04-18 13:37:05.855625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.411 [2024-04-18 13:37:05.855702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.411 [2024-04-18 13:37:05.855753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.411 [2024-04-18 13:37:05.855756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.974 13:37:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:03.974 13:37:06 -- common/autotest_common.sh@850 -- # return 0 00:08:03.974 13:37:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:03.974 13:37:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:03.974 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:08:03.974 13:37:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.974 13:37:06 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:03.974 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.974 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:08:03.974 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.974 13:37:06 -- target/rpc.sh@26 -- # stats='{ 00:08:03.974 "tick_rate": 2700000000, 00:08:03.974 "poll_groups": [ 00:08:03.974 { 00:08:03.974 "name": "nvmf_tgt_poll_group_0", 00:08:03.974 "admin_qpairs": 0, 00:08:03.974 "io_qpairs": 0, 00:08:03.974 "current_admin_qpairs": 0, 00:08:03.974 "current_io_qpairs": 0, 00:08:03.975 "pending_bdev_io": 0, 00:08:03.975 "completed_nvme_io": 0, 00:08:03.975 "transports": [] 00:08:03.975 }, 00:08:03.975 { 00:08:03.975 "name": "nvmf_tgt_poll_group_1", 00:08:03.975 "admin_qpairs": 0, 00:08:03.975 "io_qpairs": 0, 00:08:03.975 "current_admin_qpairs": 0, 00:08:03.975 "current_io_qpairs": 0, 00:08:03.975 "pending_bdev_io": 0, 00:08:03.975 "completed_nvme_io": 0, 00:08:03.975 "transports": [] 00:08:03.975 }, 00:08:03.975 { 00:08:03.975 "name": "nvmf_tgt_poll_group_2", 00:08:03.975 "admin_qpairs": 0, 00:08:03.975 "io_qpairs": 0, 00:08:03.975 "current_admin_qpairs": 0, 00:08:03.975 "current_io_qpairs": 0, 00:08:03.975 "pending_bdev_io": 0, 00:08:03.975 "completed_nvme_io": 0, 00:08:03.975 "transports": [] 00:08:03.975 }, 00:08:03.975 { 00:08:03.975 "name": "nvmf_tgt_poll_group_3", 00:08:03.975 "admin_qpairs": 0, 00:08:03.975 "io_qpairs": 0, 00:08:03.975 "current_admin_qpairs": 0, 00:08:03.975 
"current_io_qpairs": 0, 00:08:03.975 "pending_bdev_io": 0, 00:08:03.975 "completed_nvme_io": 0, 00:08:03.975 "transports": [] 00:08:03.975 } 00:08:03.975 ] 00:08:03.975 }' 00:08:03.975 13:37:06 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:03.975 13:37:06 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:03.975 13:37:06 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:03.975 13:37:06 -- target/rpc.sh@15 -- # wc -l 00:08:04.232 13:37:06 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:04.232 13:37:06 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:04.232 13:37:06 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:04.232 13:37:06 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:04.232 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.232 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:08:04.232 [2024-04-18 13:37:06.898626] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9210d0/0x9255c0) succeed. 00:08:04.232 [2024-04-18 13:37:06.913238] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9226c0/0x966c50) succeed. 00:08:04.490 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.490 13:37:07 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:04.490 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.490 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:08:04.490 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.490 13:37:07 -- target/rpc.sh@33 -- # stats='{ 00:08:04.490 "tick_rate": 2700000000, 00:08:04.490 "poll_groups": [ 00:08:04.490 { 00:08:04.490 "name": "nvmf_tgt_poll_group_0", 00:08:04.490 "admin_qpairs": 0, 00:08:04.490 "io_qpairs": 0, 00:08:04.490 "current_admin_qpairs": 0, 00:08:04.490 "current_io_qpairs": 0, 00:08:04.490 "pending_bdev_io": 0, 00:08:04.490 "completed_nvme_io": 0, 00:08:04.490 "transports": [ 00:08:04.490 { 00:08:04.490 "trtype": "RDMA", 00:08:04.490 "pending_data_buffer": 0, 00:08:04.490 "devices": [ 00:08:04.490 { 00:08:04.490 "name": "mlx5_0", 00:08:04.490 "polls": 21857, 00:08:04.490 "idle_polls": 21857, 00:08:04.490 "completions": 0, 00:08:04.490 "requests": 0, 00:08:04.490 "request_latency": 0, 00:08:04.490 "pending_free_request": 0, 00:08:04.490 "pending_rdma_read": 0, 00:08:04.490 "pending_rdma_write": 0, 00:08:04.490 "pending_rdma_send": 0, 00:08:04.490 "total_send_wrs": 0, 00:08:04.490 "send_doorbell_updates": 0, 00:08:04.490 "total_recv_wrs": 4096, 00:08:04.490 "recv_doorbell_updates": 1 00:08:04.490 }, 00:08:04.490 { 00:08:04.490 "name": "mlx5_1", 00:08:04.490 "polls": 21857, 00:08:04.490 "idle_polls": 21857, 00:08:04.490 "completions": 0, 00:08:04.490 "requests": 0, 00:08:04.490 "request_latency": 0, 00:08:04.490 "pending_free_request": 0, 00:08:04.490 "pending_rdma_read": 0, 00:08:04.490 "pending_rdma_write": 0, 00:08:04.490 "pending_rdma_send": 0, 00:08:04.490 "total_send_wrs": 0, 00:08:04.490 "send_doorbell_updates": 0, 00:08:04.490 "total_recv_wrs": 4096, 00:08:04.490 "recv_doorbell_updates": 1 00:08:04.490 } 00:08:04.490 ] 00:08:04.490 } 00:08:04.490 ] 00:08:04.490 }, 00:08:04.490 { 00:08:04.490 "name": "nvmf_tgt_poll_group_1", 00:08:04.490 "admin_qpairs": 0, 00:08:04.490 "io_qpairs": 0, 00:08:04.490 "current_admin_qpairs": 0, 00:08:04.490 "current_io_qpairs": 0, 00:08:04.490 "pending_bdev_io": 0, 00:08:04.490 "completed_nvme_io": 0, 00:08:04.490 "transports": [ 00:08:04.490 { 00:08:04.490 "trtype": "RDMA", 00:08:04.490 
"pending_data_buffer": 0, 00:08:04.490 "devices": [ 00:08:04.490 { 00:08:04.490 "name": "mlx5_0", 00:08:04.490 "polls": 14690, 00:08:04.490 "idle_polls": 14690, 00:08:04.490 "completions": 0, 00:08:04.490 "requests": 0, 00:08:04.490 "request_latency": 0, 00:08:04.490 "pending_free_request": 0, 00:08:04.490 "pending_rdma_read": 0, 00:08:04.490 "pending_rdma_write": 0, 00:08:04.490 "pending_rdma_send": 0, 00:08:04.490 "total_send_wrs": 0, 00:08:04.490 "send_doorbell_updates": 0, 00:08:04.490 "total_recv_wrs": 4096, 00:08:04.490 "recv_doorbell_updates": 1 00:08:04.490 }, 00:08:04.490 { 00:08:04.491 "name": "mlx5_1", 00:08:04.491 "polls": 14690, 00:08:04.491 "idle_polls": 14690, 00:08:04.491 "completions": 0, 00:08:04.491 "requests": 0, 00:08:04.491 "request_latency": 0, 00:08:04.491 "pending_free_request": 0, 00:08:04.491 "pending_rdma_read": 0, 00:08:04.491 "pending_rdma_write": 0, 00:08:04.491 "pending_rdma_send": 0, 00:08:04.491 "total_send_wrs": 0, 00:08:04.491 "send_doorbell_updates": 0, 00:08:04.491 "total_recv_wrs": 4096, 00:08:04.491 "recv_doorbell_updates": 1 00:08:04.491 } 00:08:04.491 ] 00:08:04.491 } 00:08:04.491 ] 00:08:04.491 }, 00:08:04.491 { 00:08:04.491 "name": "nvmf_tgt_poll_group_2", 00:08:04.491 "admin_qpairs": 0, 00:08:04.491 "io_qpairs": 0, 00:08:04.491 "current_admin_qpairs": 0, 00:08:04.491 "current_io_qpairs": 0, 00:08:04.491 "pending_bdev_io": 0, 00:08:04.491 "completed_nvme_io": 0, 00:08:04.491 "transports": [ 00:08:04.491 { 00:08:04.491 "trtype": "RDMA", 00:08:04.491 "pending_data_buffer": 0, 00:08:04.491 "devices": [ 00:08:04.491 { 00:08:04.491 "name": "mlx5_0", 00:08:04.491 "polls": 7677, 00:08:04.491 "idle_polls": 7677, 00:08:04.491 "completions": 0, 00:08:04.491 "requests": 0, 00:08:04.491 "request_latency": 0, 00:08:04.491 "pending_free_request": 0, 00:08:04.491 "pending_rdma_read": 0, 00:08:04.491 "pending_rdma_write": 0, 00:08:04.491 "pending_rdma_send": 0, 00:08:04.491 "total_send_wrs": 0, 00:08:04.491 "send_doorbell_updates": 0, 00:08:04.491 "total_recv_wrs": 4096, 00:08:04.491 "recv_doorbell_updates": 1 00:08:04.491 }, 00:08:04.491 { 00:08:04.491 "name": "mlx5_1", 00:08:04.491 "polls": 7677, 00:08:04.491 "idle_polls": 7677, 00:08:04.491 "completions": 0, 00:08:04.491 "requests": 0, 00:08:04.491 "request_latency": 0, 00:08:04.491 "pending_free_request": 0, 00:08:04.491 "pending_rdma_read": 0, 00:08:04.491 "pending_rdma_write": 0, 00:08:04.491 "pending_rdma_send": 0, 00:08:04.491 "total_send_wrs": 0, 00:08:04.491 "send_doorbell_updates": 0, 00:08:04.491 "total_recv_wrs": 4096, 00:08:04.491 "recv_doorbell_updates": 1 00:08:04.491 } 00:08:04.491 ] 00:08:04.491 } 00:08:04.491 ] 00:08:04.491 }, 00:08:04.491 { 00:08:04.491 "name": "nvmf_tgt_poll_group_3", 00:08:04.491 "admin_qpairs": 0, 00:08:04.491 "io_qpairs": 0, 00:08:04.491 "current_admin_qpairs": 0, 00:08:04.491 "current_io_qpairs": 0, 00:08:04.491 "pending_bdev_io": 0, 00:08:04.491 "completed_nvme_io": 0, 00:08:04.491 "transports": [ 00:08:04.491 { 00:08:04.491 "trtype": "RDMA", 00:08:04.491 "pending_data_buffer": 0, 00:08:04.491 "devices": [ 00:08:04.491 { 00:08:04.491 "name": "mlx5_0", 00:08:04.491 "polls": 891, 00:08:04.491 "idle_polls": 891, 00:08:04.491 "completions": 0, 00:08:04.491 "requests": 0, 00:08:04.491 "request_latency": 0, 00:08:04.491 "pending_free_request": 0, 00:08:04.491 "pending_rdma_read": 0, 00:08:04.491 "pending_rdma_write": 0, 00:08:04.491 "pending_rdma_send": 0, 00:08:04.491 "total_send_wrs": 0, 00:08:04.491 "send_doorbell_updates": 0, 00:08:04.491 "total_recv_wrs": 4096, 
00:08:04.491 "recv_doorbell_updates": 1 00:08:04.491 }, 00:08:04.491 { 00:08:04.491 "name": "mlx5_1", 00:08:04.491 "polls": 891, 00:08:04.491 "idle_polls": 891, 00:08:04.491 "completions": 0, 00:08:04.491 "requests": 0, 00:08:04.491 "request_latency": 0, 00:08:04.491 "pending_free_request": 0, 00:08:04.491 "pending_rdma_read": 0, 00:08:04.491 "pending_rdma_write": 0, 00:08:04.491 "pending_rdma_send": 0, 00:08:04.491 "total_send_wrs": 0, 00:08:04.491 "send_doorbell_updates": 0, 00:08:04.491 "total_recv_wrs": 4096, 00:08:04.491 "recv_doorbell_updates": 1 00:08:04.491 } 00:08:04.491 ] 00:08:04.491 } 00:08:04.491 ] 00:08:04.491 } 00:08:04.491 ] 00:08:04.491 }' 00:08:04.491 13:37:07 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:04.491 13:37:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:04.491 13:37:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:04.491 13:37:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:04.491 13:37:07 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:04.491 13:37:07 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:04.491 13:37:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:04.491 13:37:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:04.491 13:37:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:04.491 13:37:07 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:04.491 13:37:07 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:08:04.491 13:37:07 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:08:04.491 13:37:07 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:08:04.491 13:37:07 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:08:04.491 13:37:07 -- target/rpc.sh@15 -- # wc -l 00:08:04.491 13:37:07 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:08:04.491 13:37:07 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:08:04.749 13:37:07 -- target/rpc.sh@41 -- # transport_type=RDMA 00:08:04.749 13:37:07 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:08:04.749 13:37:07 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:08:04.749 13:37:07 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:08:04.749 13:37:07 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:08:04.749 13:37:07 -- target/rpc.sh@15 -- # wc -l 00:08:04.749 13:37:07 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:08:04.749 13:37:07 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:04.749 13:37:07 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:04.749 13:37:07 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:04.749 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.749 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 Malloc1 00:08:04.749 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.749 13:37:07 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.749 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.749 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.749 13:37:07 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.749 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.749 13:37:07 -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.749 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.749 13:37:07 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:04.749 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.749 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.749 13:37:07 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:04.749 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.749 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 [2024-04-18 13:37:07.484589] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:04.749 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.749 13:37:07 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -s 4420 00:08:04.749 13:37:07 -- common/autotest_common.sh@638 -- # local es=0 00:08:04.749 13:37:07 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -s 4420 00:08:04.749 13:37:07 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:04.749 13:37:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:04.749 13:37:07 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:04.749 13:37:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:04.749 13:37:07 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:04.749 13:37:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:04.749 13:37:07 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:04.749 13:37:07 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:04.749 13:37:07 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -s 4420 00:08:04.749 [2024-04-18 13:37:07.524451] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911' 00:08:04.749 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:04.749 could not add new controller: failed to write to nvme-fabrics device 00:08:04.749 13:37:07 -- common/autotest_common.sh@641 -- # es=1 00:08:04.749 13:37:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:04.749 13:37:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:04.749 13:37:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:04.749 13:37:07 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:08:04.749 13:37:07 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.749 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.006 13:37:07 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:05.938 13:37:08 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.938 13:37:08 -- common/autotest_common.sh@1184 -- # local i=0 00:08:05.938 13:37:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.938 13:37:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:05.938 13:37:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:08.462 13:37:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:08.462 13:37:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:08.462 13:37:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.462 13:37:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:08.462 13:37:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.462 13:37:10 -- common/autotest_common.sh@1194 -- # return 0 00:08:08.462 13:37:10 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.025 13:37:11 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.025 13:37:11 -- common/autotest_common.sh@1205 -- # local i=0 00:08:09.025 13:37:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:09.025 13:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.025 13:37:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:09.025 13:37:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.025 13:37:11 -- common/autotest_common.sh@1217 -- # return 0 00:08:09.025 13:37:11 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:08:09.025 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:09.025 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:08:09.025 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:09.025 13:37:11 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:09.025 13:37:11 -- common/autotest_common.sh@638 -- # local es=0 00:08:09.025 13:37:11 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:09.025 13:37:11 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:09.025 13:37:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:09.025 13:37:11 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:09.282 13:37:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:09.282 13:37:11 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:09.282 13:37:11 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:08:09.282 13:37:11 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:09.282 13:37:11 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:09.283 13:37:11 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:09.283 [2024-04-18 13:37:11.867307] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911' 00:08:09.283 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:09.283 could not add new controller: failed to write to nvme-fabrics device 00:08:09.283 13:37:11 -- common/autotest_common.sh@641 -- # es=1 00:08:09.283 13:37:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:09.283 13:37:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:09.283 13:37:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:09.283 13:37:11 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:09.283 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:09.283 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:08:09.283 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:09.283 13:37:11 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:10.213 13:37:12 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.213 13:37:12 -- common/autotest_common.sh@1184 -- # local i=0 00:08:10.213 13:37:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.213 13:37:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:10.213 13:37:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:12.736 13:37:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:12.736 13:37:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:12.736 13:37:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.736 13:37:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:12.736 13:37:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.736 13:37:15 -- common/autotest_common.sh@1194 -- # return 0 00:08:12.736 13:37:15 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.715 13:37:16 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.715 13:37:16 -- common/autotest_common.sh@1205 -- # local i=0 00:08:13.715 13:37:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:13.715 13:37:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.715 13:37:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:13.715 13:37:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.715 13:37:16 -- common/autotest_common.sh@1217 -- # return 0 00:08:13.715 13:37:16 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.715 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.715 
13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.715 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.715 13:37:16 -- target/rpc.sh@81 -- # seq 1 5 00:08:13.715 13:37:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:13.715 13:37:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:13.715 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.715 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.715 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.715 13:37:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:13.715 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.715 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.715 [2024-04-18 13:37:16.194620] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:13.715 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.715 13:37:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:13.715 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.715 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.715 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.715 13:37:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:13.715 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.715 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:08:13.715 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.715 13:37:16 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:14.646 13:37:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:14.646 13:37:17 -- common/autotest_common.sh@1184 -- # local i=0 00:08:14.646 13:37:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:14.646 13:37:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:14.646 13:37:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:16.540 13:37:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:16.540 13:37:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:16.540 13:37:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:16.540 13:37:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:16.540 13:37:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:16.540 13:37:19 -- common/autotest_common.sh@1194 -- # return 0 00:08:16.540 13:37:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.910 13:37:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.910 13:37:20 -- common/autotest_common.sh@1205 -- # local i=0 00:08:17.910 13:37:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:17.910 13:37:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.910 13:37:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:17.910 13:37:20 -- common/autotest_common.sh@1213 
-- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.910 13:37:20 -- common/autotest_common.sh@1217 -- # return 0 00:08:17.910 13:37:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.910 13:37:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.910 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 13:37:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.910 13:37:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.910 13:37:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.910 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 13:37:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.910 13:37:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:17.910 13:37:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.910 13:37:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.910 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 13:37:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.910 13:37:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:17.910 13:37:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.910 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 [2024-04-18 13:37:20.500451] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:17.910 13:37:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.910 13:37:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:17.910 13:37:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.910 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 13:37:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.910 13:37:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.910 13:37:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.910 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 13:37:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.910 13:37:20 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:18.840 13:37:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.840 13:37:21 -- common/autotest_common.sh@1184 -- # local i=0 00:08:18.840 13:37:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.840 13:37:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:18.840 13:37:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:21.363 13:37:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:21.363 13:37:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:21.363 13:37:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.363 13:37:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:21.363 13:37:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.363 13:37:23 -- common/autotest_common.sh@1194 -- # return 0 00:08:21.363 13:37:23 -- target/rpc.sh@90 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:08:22.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.294 13:37:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.294 13:37:24 -- common/autotest_common.sh@1205 -- # local i=0 00:08:22.294 13:37:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:22.294 13:37:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.294 13:37:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:22.294 13:37:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.294 13:37:24 -- common/autotest_common.sh@1217 -- # return 0 00:08:22.294 13:37:24 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.294 13:37:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.294 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.294 13:37:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.294 13:37:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.294 13:37:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.294 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.294 13:37:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.294 13:37:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:22.294 13:37:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.294 13:37:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.294 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.294 13:37:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.294 13:37:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:22.294 13:37:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.294 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.294 [2024-04-18 13:37:24.817709] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:22.294 13:37:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.294 13:37:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:22.294 13:37:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.294 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.294 13:37:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.294 13:37:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.294 13:37:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.294 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.294 13:37:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.294 13:37:24 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:23.224 13:37:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:23.224 13:37:25 -- common/autotest_common.sh@1184 -- # local i=0 00:08:23.224 13:37:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:23.225 13:37:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:23.225 13:37:25 -- common/autotest_common.sh@1191 -- # sleep 2 
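Each pass of the loop above exercises one subsystem lifecycle over RDMA: create the subsystem, add the 192.168.100.8:4420 listener, attach the Malloc1 bdev as a namespace, allow any host, connect from the initiator with nvme-cli, wait for the serial number to appear, then disconnect and tear the subsystem down. A rough standalone equivalent using SPDK's rpc.py (rpc_cmd in the trace is the test wrapper around it); the NQN, address, bdev name and flags are taken from the trace, the rest is illustrative:

RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC bdev_malloc_create 64 512 -b Malloc1                       # backing bdev
$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME       # serial number
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                  # namespace id 5
$RPC nvmf_subsystem_allow_any_host "$NQN"

nvme connect -i 15 -t rdma -n "$NQN" -a 192.168.100.8 -s 4420   # flags as in the trace
# ... check that a device with serial SPDKISFASTANDAWESOME shows up in lsblk ...
nvme disconnect -n "$NQN"

$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"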
00:08:25.748 13:37:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:25.748 13:37:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:25.748 13:37:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.748 13:37:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:25.748 13:37:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.748 13:37:27 -- common/autotest_common.sh@1194 -- # return 0 00:08:25.748 13:37:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:26.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.313 13:37:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:26.313 13:37:29 -- common/autotest_common.sh@1205 -- # local i=0 00:08:26.313 13:37:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:26.313 13:37:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.313 13:37:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:26.313 13:37:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.313 13:37:29 -- common/autotest_common.sh@1217 -- # return 0 00:08:26.313 13:37:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.313 13:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.313 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.313 13:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.313 13:37:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.313 13:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.313 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.313 13:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.313 13:37:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:26.313 13:37:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:26.313 13:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.313 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.570 13:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.570 13:37:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.570 13:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.570 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.570 [2024-04-18 13:37:29.127441] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.570 13:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.570 13:37:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:26.570 13:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.570 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.570 13:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.570 13:37:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:26.570 13:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.570 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.570 13:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.570 13:37:29 -- target/rpc.sh@86 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:27.501 13:37:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.501 13:37:30 -- common/autotest_common.sh@1184 -- # local i=0 00:08:27.501 13:37:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.501 13:37:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:27.501 13:37:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:30.026 13:37:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:30.026 13:37:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:30.026 13:37:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.026 13:37:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:30.026 13:37:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.026 13:37:32 -- common/autotest_common.sh@1194 -- # return 0 00:08:30.026 13:37:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.590 13:37:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.590 13:37:33 -- common/autotest_common.sh@1205 -- # local i=0 00:08:30.590 13:37:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:30.590 13:37:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.590 13:37:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:30.590 13:37:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.590 13:37:33 -- common/autotest_common.sh@1217 -- # return 0 00:08:30.590 13:37:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.590 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.590 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.590 13:37:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.590 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.590 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.590 13:37:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:30.590 13:37:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.590 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.590 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.590 13:37:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:30.590 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.590 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.590 [2024-04-18 13:37:33.390591] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:30.847 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.847 13:37:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:30.847 13:37:33 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.847 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.847 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.847 13:37:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.847 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.847 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.847 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.847 13:37:33 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:31.809 13:37:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:31.809 13:37:34 -- common/autotest_common.sh@1184 -- # local i=0 00:08:31.809 13:37:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.809 13:37:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:31.809 13:37:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:34.331 13:37:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:34.331 13:37:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:34.331 13:37:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.331 13:37:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:34.331 13:37:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.331 13:37:36 -- common/autotest_common.sh@1194 -- # return 0 00:08:34.331 13:37:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.895 13:37:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.895 13:37:37 -- common/autotest_common.sh@1205 -- # local i=0 00:08:34.895 13:37:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:34.895 13:37:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.895 13:37:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:34.895 13:37:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.895 13:37:37 -- common/autotest_common.sh@1217 -- # return 0 00:08:34.895 13:37:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.895 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.895 13:37:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.895 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.895 13:37:37 -- target/rpc.sh@99 -- # seq 1 5 00:08:34.895 13:37:37 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.895 13:37:37 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.895 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.895 13:37:37 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.895 [2024-04-18 13:37:37.666013] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:34.895 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.895 13:37:37 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.895 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.895 13:37:37 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.895 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.895 13:37:37 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.895 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.895 13:37:37 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.895 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.895 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:35.153 13:37:37 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 [2024-04-18 13:37:37.715314] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 
-- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:35.153 13:37:37 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 [2024-04-18 13:37:37.763810] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:35.153 13:37:37 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 [2024-04-18 13:37:37.812363] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:35.153 13:37:37 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.153 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.153 13:37:37 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.153 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.153 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.154 [2024-04-18 13:37:37.860887] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.154 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.154 13:37:37 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.154 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.154 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.154 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.154 13:37:37 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:35.154 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.154 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.154 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.154 13:37:37 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.154 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.154 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.154 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.154 13:37:37 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.154 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.154 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.154 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.154 13:37:37 -- 
target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:35.154 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.154 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.154 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.154 13:37:37 -- target/rpc.sh@110 -- # stats='{ 00:08:35.154 "tick_rate": 2700000000, 00:08:35.154 "poll_groups": [ 00:08:35.154 { 00:08:35.154 "name": "nvmf_tgt_poll_group_0", 00:08:35.154 "admin_qpairs": 2, 00:08:35.154 "io_qpairs": 27, 00:08:35.154 "current_admin_qpairs": 0, 00:08:35.154 "current_io_qpairs": 0, 00:08:35.154 "pending_bdev_io": 0, 00:08:35.154 "completed_nvme_io": 78, 00:08:35.154 "transports": [ 00:08:35.154 { 00:08:35.154 "trtype": "RDMA", 00:08:35.154 "pending_data_buffer": 0, 00:08:35.154 "devices": [ 00:08:35.154 { 00:08:35.154 "name": "mlx5_0", 00:08:35.154 "polls": 4263388, 00:08:35.154 "idle_polls": 4263137, 00:08:35.154 "completions": 271, 00:08:35.154 "requests": 135, 00:08:35.154 "request_latency": 30765765, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 213, 00:08:35.154 "send_doorbell_updates": 126, 00:08:35.154 "total_recv_wrs": 4231, 00:08:35.154 "recv_doorbell_updates": 126 00:08:35.154 }, 00:08:35.154 { 00:08:35.154 "name": "mlx5_1", 00:08:35.154 "polls": 4263388, 00:08:35.154 "idle_polls": 4263388, 00:08:35.154 "completions": 0, 00:08:35.154 "requests": 0, 00:08:35.154 "request_latency": 0, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 0, 00:08:35.154 "send_doorbell_updates": 0, 00:08:35.154 "total_recv_wrs": 4096, 00:08:35.154 "recv_doorbell_updates": 1 00:08:35.154 } 00:08:35.154 ] 00:08:35.154 } 00:08:35.154 ] 00:08:35.154 }, 00:08:35.154 { 00:08:35.154 "name": "nvmf_tgt_poll_group_1", 00:08:35.154 "admin_qpairs": 2, 00:08:35.154 "io_qpairs": 26, 00:08:35.154 "current_admin_qpairs": 0, 00:08:35.154 "current_io_qpairs": 0, 00:08:35.154 "pending_bdev_io": 0, 00:08:35.154 "completed_nvme_io": 76, 00:08:35.154 "transports": [ 00:08:35.154 { 00:08:35.154 "trtype": "RDMA", 00:08:35.154 "pending_data_buffer": 0, 00:08:35.154 "devices": [ 00:08:35.154 { 00:08:35.154 "name": "mlx5_0", 00:08:35.154 "polls": 4441474, 00:08:35.154 "idle_polls": 4441234, 00:08:35.154 "completions": 262, 00:08:35.154 "requests": 131, 00:08:35.154 "request_latency": 30658697, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 208, 00:08:35.154 "send_doorbell_updates": 121, 00:08:35.154 "total_recv_wrs": 4227, 00:08:35.154 "recv_doorbell_updates": 122 00:08:35.154 }, 00:08:35.154 { 00:08:35.154 "name": "mlx5_1", 00:08:35.154 "polls": 4441474, 00:08:35.154 "idle_polls": 4441474, 00:08:35.154 "completions": 0, 00:08:35.154 "requests": 0, 00:08:35.154 "request_latency": 0, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 0, 00:08:35.154 "send_doorbell_updates": 0, 00:08:35.154 "total_recv_wrs": 4096, 00:08:35.154 "recv_doorbell_updates": 1 00:08:35.154 } 00:08:35.154 ] 00:08:35.154 } 00:08:35.154 ] 00:08:35.154 }, 00:08:35.154 { 00:08:35.154 "name": "nvmf_tgt_poll_group_2", 00:08:35.154 
"admin_qpairs": 1, 00:08:35.154 "io_qpairs": 26, 00:08:35.154 "current_admin_qpairs": 0, 00:08:35.154 "current_io_qpairs": 0, 00:08:35.154 "pending_bdev_io": 0, 00:08:35.154 "completed_nvme_io": 175, 00:08:35.154 "transports": [ 00:08:35.154 { 00:08:35.154 "trtype": "RDMA", 00:08:35.154 "pending_data_buffer": 0, 00:08:35.154 "devices": [ 00:08:35.154 { 00:08:35.154 "name": "mlx5_0", 00:08:35.154 "polls": 4404040, 00:08:35.154 "idle_polls": 4403691, 00:08:35.154 "completions": 409, 00:08:35.154 "requests": 204, 00:08:35.154 "request_latency": 70105650, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 368, 00:08:35.154 "send_doorbell_updates": 173, 00:08:35.154 "total_recv_wrs": 4300, 00:08:35.154 "recv_doorbell_updates": 173 00:08:35.154 }, 00:08:35.154 { 00:08:35.154 "name": "mlx5_1", 00:08:35.154 "polls": 4404040, 00:08:35.154 "idle_polls": 4404040, 00:08:35.154 "completions": 0, 00:08:35.154 "requests": 0, 00:08:35.154 "request_latency": 0, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 0, 00:08:35.154 "send_doorbell_updates": 0, 00:08:35.154 "total_recv_wrs": 4096, 00:08:35.154 "recv_doorbell_updates": 1 00:08:35.154 } 00:08:35.154 ] 00:08:35.154 } 00:08:35.154 ] 00:08:35.154 }, 00:08:35.154 { 00:08:35.154 "name": "nvmf_tgt_poll_group_3", 00:08:35.154 "admin_qpairs": 2, 00:08:35.154 "io_qpairs": 26, 00:08:35.154 "current_admin_qpairs": 0, 00:08:35.154 "current_io_qpairs": 0, 00:08:35.154 "pending_bdev_io": 0, 00:08:35.154 "completed_nvme_io": 126, 00:08:35.154 "transports": [ 00:08:35.154 { 00:08:35.154 "trtype": "RDMA", 00:08:35.154 "pending_data_buffer": 0, 00:08:35.154 "devices": [ 00:08:35.154 { 00:08:35.154 "name": "mlx5_0", 00:08:35.154 "polls": 3216779, 00:08:35.154 "idle_polls": 3216454, 00:08:35.154 "completions": 366, 00:08:35.154 "requests": 183, 00:08:35.154 "request_latency": 52297848, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 311, 00:08:35.154 "send_doorbell_updates": 163, 00:08:35.154 "total_recv_wrs": 4279, 00:08:35.154 "recv_doorbell_updates": 164 00:08:35.154 }, 00:08:35.154 { 00:08:35.154 "name": "mlx5_1", 00:08:35.154 "polls": 3216779, 00:08:35.154 "idle_polls": 3216779, 00:08:35.154 "completions": 0, 00:08:35.154 "requests": 0, 00:08:35.154 "request_latency": 0, 00:08:35.154 "pending_free_request": 0, 00:08:35.154 "pending_rdma_read": 0, 00:08:35.154 "pending_rdma_write": 0, 00:08:35.154 "pending_rdma_send": 0, 00:08:35.154 "total_send_wrs": 0, 00:08:35.155 "send_doorbell_updates": 0, 00:08:35.155 "total_recv_wrs": 4096, 00:08:35.155 "recv_doorbell_updates": 1 00:08:35.155 } 00:08:35.155 ] 00:08:35.155 } 00:08:35.155 ] 00:08:35.155 } 00:08:35.155 ] 00:08:35.155 }' 00:08:35.155 13:37:37 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:35.155 13:37:37 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:35.155 13:37:37 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:35.155 13:37:37 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.412 13:37:37 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:35.412 13:37:37 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:35.412 13:37:37 -- target/rpc.sh@19 
-- # local 'filter=.poll_groups[].io_qpairs' 00:08:35.412 13:37:37 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:35.412 13:37:37 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.412 13:37:38 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:08:35.412 13:37:38 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:08:35.412 13:37:38 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:08:35.412 13:37:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:08:35.412 13:37:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:08:35.412 13:37:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.412 13:37:38 -- target/rpc.sh@117 -- # (( 1308 > 0 )) 00:08:35.412 13:37:38 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:08:35.412 13:37:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:08:35.412 13:37:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:08:35.412 13:37:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.412 13:37:38 -- target/rpc.sh@118 -- # (( 183827960 > 0 )) 00:08:35.412 13:37:38 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:35.412 13:37:38 -- target/rpc.sh@123 -- # nvmftestfini 00:08:35.412 13:37:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:35.412 13:37:38 -- nvmf/common.sh@117 -- # sync 00:08:35.412 13:37:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:35.412 13:37:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:35.412 13:37:38 -- nvmf/common.sh@120 -- # set +e 00:08:35.412 13:37:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.412 13:37:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:35.412 rmmod nvme_rdma 00:08:35.412 rmmod nvme_fabrics 00:08:35.412 13:37:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.412 13:37:38 -- nvmf/common.sh@124 -- # set -e 00:08:35.412 13:37:38 -- nvmf/common.sh@125 -- # return 0 00:08:35.412 13:37:38 -- nvmf/common.sh@478 -- # '[' -n 1057851 ']' 00:08:35.412 13:37:38 -- nvmf/common.sh@479 -- # killprocess 1057851 00:08:35.412 13:37:38 -- common/autotest_common.sh@936 -- # '[' -z 1057851 ']' 00:08:35.412 13:37:38 -- common/autotest_common.sh@940 -- # kill -0 1057851 00:08:35.412 13:37:38 -- common/autotest_common.sh@941 -- # uname 00:08:35.412 13:37:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.412 13:37:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1057851 00:08:35.412 13:37:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:35.412 13:37:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:35.412 13:37:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1057851' 00:08:35.412 killing process with pid 1057851 00:08:35.412 13:37:38 -- common/autotest_common.sh@955 -- # kill 1057851 00:08:35.412 13:37:38 -- common/autotest_common.sh@960 -- # wait 1057851 00:08:35.978 13:37:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:35.978 13:37:38 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:35.978 00:08:35.978 real 0m35.904s 00:08:35.978 user 2m11.450s 00:08:35.978 sys 0m3.518s 00:08:35.978 13:37:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:35.978 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.978 ************************************ 00:08:35.978 END TEST nvmf_rpc 00:08:35.978 
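The qpair, completion and latency totals checked just above come from a small jsum helper in target/rpc.sh: it applies a jq filter to the nvmf_get_stats JSON and adds the matching numbers with awk. A condensed sketch of what the trace shows; the variable holding the stats document is not visible in this excerpt, so "$stats" below is an assumption:

    # jsum <jq-filter>: sum every value selected by the filter across all poll groups.
    jsum() {
        local filter=$1
        # jq emits one number per poll group / device; awk accumulates them.
        jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
    }

    # Used above as, for example:
    #   (( $(jsum '.poll_groups[].io_qpairs') > 0 ))
    #   (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))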
************************************ 00:08:35.978 13:37:38 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:35.978 13:37:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:35.978 13:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.978 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.978 ************************************ 00:08:35.978 START TEST nvmf_invalid 00:08:35.978 ************************************ 00:08:35.978 13:37:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:35.978 * Looking for test storage... 00:08:36.237 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:36.237 13:37:38 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.237 13:37:38 -- nvmf/common.sh@7 -- # uname -s 00:08:36.237 13:37:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.237 13:37:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.237 13:37:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.237 13:37:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.237 13:37:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.237 13:37:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.237 13:37:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.237 13:37:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.237 13:37:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.237 13:37:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.237 13:37:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:08:36.237 13:37:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:08:36.237 13:37:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.237 13:37:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.237 13:37:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.237 13:37:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.237 13:37:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:36.237 13:37:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.237 13:37:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.237 13:37:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.237 13:37:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.237 13:37:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.237 13:37:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.237 13:37:38 -- paths/export.sh@5 -- # export PATH 00:08:36.237 13:37:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.237 13:37:38 -- nvmf/common.sh@47 -- # : 0 00:08:36.237 13:37:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.237 13:37:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.237 13:37:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.237 13:37:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.237 13:37:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.237 13:37:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.237 13:37:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.237 13:37:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.237 13:37:38 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:36.237 13:37:38 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:36.237 13:37:38 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:36.237 13:37:38 -- target/invalid.sh@14 -- # target=foobar 00:08:36.237 13:37:38 -- target/invalid.sh@16 -- # RANDOM=0 00:08:36.237 13:37:38 -- target/invalid.sh@34 -- # nvmftestinit 00:08:36.237 13:37:38 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:36.237 13:37:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.237 13:37:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:36.237 13:37:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:36.237 13:37:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:36.237 13:37:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.237 13:37:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.237 13:37:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.237 13:37:38 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:36.237 13:37:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:36.237 13:37:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.237 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:08:38.764 13:37:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.764 13:37:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.764 13:37:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.764 13:37:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.764 13:37:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.764 13:37:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.764 13:37:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.764 13:37:41 -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.764 13:37:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.764 13:37:41 -- nvmf/common.sh@296 -- # e810=() 00:08:38.764 13:37:41 -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.764 13:37:41 -- nvmf/common.sh@297 -- # x722=() 00:08:38.764 13:37:41 -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.764 13:37:41 -- nvmf/common.sh@298 -- # mlx=() 00:08:38.764 13:37:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.764 13:37:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.764 13:37:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.764 13:37:41 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:38.764 13:37:41 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:38.764 13:37:41 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:38.764 13:37:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.764 13:37:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:08:38.764 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:08:38.764 13:37:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.764 13:37:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@341 -- # echo 'Found 
0000:81:00.1 (0x15b3 - 0x1015)' 00:08:38.764 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:08:38.764 13:37:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.764 13:37:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.764 13:37:41 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.764 13:37:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:38.764 13:37:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.764 13:37:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:08:38.764 Found net devices under 0000:81:00.0: mlx_0_0 00:08:38.764 13:37:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.764 13:37:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.764 13:37:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:38.764 13:37:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.764 13:37:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:08:38.764 Found net devices under 0000:81:00.1: mlx_0_1 00:08:38.764 13:37:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.764 13:37:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:38.764 13:37:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:38.764 13:37:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:38.764 13:37:41 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:38.764 13:37:41 -- nvmf/common.sh@58 -- # uname 00:08:38.764 13:37:41 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:38.764 13:37:41 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:38.764 13:37:41 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:38.764 13:37:41 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:38.764 13:37:41 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:38.764 13:37:41 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:38.764 13:37:41 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:38.764 13:37:41 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:38.764 13:37:41 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:38.764 13:37:41 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.764 13:37:41 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:38.764 13:37:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.764 13:37:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:38.764 13:37:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:38.764 13:37:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.764 13:37:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:38.764 13:37:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
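The two "Found ..." pairs above are printed by the PCI scan in nvmf/common.sh (gather_supported_nvmf_pci_devs): for each candidate function it resolves the network interface name through sysfs and appends it to net_devs. A condensed sketch of the loop as traced (only the vendor/device filtering around it is omitted):

    for pci in "${pci_devs[@]}"; do
        # Each PCI function exposes its netdev under /sys/bus/pci/devices/<bdf>/net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this host the scan yields mlx_0_0 and mlx_0_1, which rdma_device_init and allocate_nic_ips then check for the 192.168.100.8/192.168.100.9 addresses used by the tests.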
00:08:38.764 13:37:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:38.764 13:37:41 -- nvmf/common.sh@105 -- # continue 2 00:08:38.764 13:37:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.764 13:37:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:38.764 13:37:41 -- nvmf/common.sh@105 -- # continue 2 00:08:38.764 13:37:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:38.764 13:37:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:38.764 13:37:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:38.764 13:37:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:38.764 13:37:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.764 13:37:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.764 13:37:41 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:38.764 13:37:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:38.764 13:37:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:38.764 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.764 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:08:38.764 altname enp129s0f0np0 00:08:38.764 inet 192.168.100.8/24 scope global mlx_0_0 00:08:38.764 valid_lft forever preferred_lft forever 00:08:38.764 13:37:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:38.764 13:37:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:38.764 13:37:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:38.764 13:37:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.765 13:37:41 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:38.765 13:37:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:38.765 13:37:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:38.765 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.765 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:08:38.765 altname enp129s0f1np1 00:08:38.765 inet 192.168.100.9/24 scope global mlx_0_1 00:08:38.765 valid_lft forever preferred_lft forever 00:08:38.765 13:37:41 -- nvmf/common.sh@411 -- # return 0 00:08:38.765 13:37:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:38.765 13:37:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.765 13:37:41 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:38.765 13:37:41 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:38.765 13:37:41 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:38.765 13:37:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.765 13:37:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:38.765 13:37:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:38.765 13:37:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.765 13:37:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:38.765 13:37:41 -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:08:38.765 13:37:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.765 13:37:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.765 13:37:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:38.765 13:37:41 -- nvmf/common.sh@105 -- # continue 2 00:08:38.765 13:37:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:38.765 13:37:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.765 13:37:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.765 13:37:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.765 13:37:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.765 13:37:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:38.765 13:37:41 -- nvmf/common.sh@105 -- # continue 2 00:08:38.765 13:37:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:38.765 13:37:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:38.765 13:37:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.765 13:37:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:38.765 13:37:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:38.765 13:37:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.765 13:37:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.765 13:37:41 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.765 192.168.100.9' 00:08:38.765 13:37:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:38.765 192.168.100.9' 00:08:38.765 13:37:41 -- nvmf/common.sh@446 -- # head -n 1 00:08:38.765 13:37:41 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.765 13:37:41 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:38.765 192.168.100.9' 00:08:38.765 13:37:41 -- nvmf/common.sh@447 -- # tail -n +2 00:08:38.765 13:37:41 -- nvmf/common.sh@447 -- # head -n 1 00:08:38.765 13:37:41 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.765 13:37:41 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:38.765 13:37:41 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.765 13:37:41 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:38.765 13:37:41 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:38.765 13:37:41 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:38.765 13:37:41 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:38.765 13:37:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:38.765 13:37:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:38.765 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:08:38.765 13:37:41 -- nvmf/common.sh@470 -- # nvmfpid=1063856 00:08:38.765 13:37:41 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.765 13:37:41 -- nvmf/common.sh@471 -- # waitforlisten 1063856 00:08:38.765 13:37:41 -- common/autotest_common.sh@817 -- # '[' -z 1063856 ']' 00:08:38.765 13:37:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.765 13:37:41 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:08:38.765 13:37:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.765 13:37:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:38.765 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:08:39.023 [2024-04-18 13:37:41.601026] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:08:39.023 [2024-04-18 13:37:41.601114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.023 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.023 [2024-04-18 13:37:41.680878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.023 [2024-04-18 13:37:41.806849] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.023 [2024-04-18 13:37:41.806914] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.023 [2024-04-18 13:37:41.806930] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.023 [2024-04-18 13:37:41.806952] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.023 [2024-04-18 13:37:41.806965] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.023 [2024-04-18 13:37:41.807028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.023 [2024-04-18 13:37:41.807085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.023 [2024-04-18 13:37:41.807141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.023 [2024-04-18 13:37:41.807144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.281 13:37:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:39.281 13:37:41 -- common/autotest_common.sh@850 -- # return 0 00:08:39.281 13:37:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:39.281 13:37:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:39.281 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:08:39.281 13:37:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.281 13:37:41 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:39.281 13:37:41 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1596 00:08:39.539 [2024-04-18 13:37:42.298958] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:39.539 13:37:42 -- target/invalid.sh@40 -- # out='request: 00:08:39.539 { 00:08:39.539 "nqn": "nqn.2016-06.io.spdk:cnode1596", 00:08:39.539 "tgt_name": "foobar", 00:08:39.539 "method": "nvmf_create_subsystem", 00:08:39.539 "req_id": 1 00:08:39.539 } 00:08:39.539 Got JSON-RPC error response 00:08:39.539 response: 00:08:39.539 { 00:08:39.539 "code": -32603, 00:08:39.539 "message": "Unable to find target foobar" 00:08:39.539 }' 00:08:39.539 13:37:42 -- target/invalid.sh@41 -- # [[ request: 00:08:39.539 { 00:08:39.539 "nqn": "nqn.2016-06.io.spdk:cnode1596", 00:08:39.539 
"tgt_name": "foobar", 00:08:39.539 "method": "nvmf_create_subsystem", 00:08:39.539 "req_id": 1 00:08:39.539 } 00:08:39.539 Got JSON-RPC error response 00:08:39.539 response: 00:08:39.539 { 00:08:39.539 "code": -32603, 00:08:39.539 "message": "Unable to find target foobar" 00:08:39.539 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:39.539 13:37:42 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:39.539 13:37:42 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3943 00:08:40.105 [2024-04-18 13:37:42.652163] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3943: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:40.105 13:37:42 -- target/invalid.sh@45 -- # out='request: 00:08:40.105 { 00:08:40.105 "nqn": "nqn.2016-06.io.spdk:cnode3943", 00:08:40.105 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:40.105 "method": "nvmf_create_subsystem", 00:08:40.105 "req_id": 1 00:08:40.105 } 00:08:40.105 Got JSON-RPC error response 00:08:40.105 response: 00:08:40.105 { 00:08:40.105 "code": -32602, 00:08:40.105 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:40.105 }' 00:08:40.105 13:37:42 -- target/invalid.sh@46 -- # [[ request: 00:08:40.105 { 00:08:40.105 "nqn": "nqn.2016-06.io.spdk:cnode3943", 00:08:40.105 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:40.105 "method": "nvmf_create_subsystem", 00:08:40.105 "req_id": 1 00:08:40.105 } 00:08:40.105 Got JSON-RPC error response 00:08:40.105 response: 00:08:40.105 { 00:08:40.105 "code": -32602, 00:08:40.105 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:40.105 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:40.105 13:37:42 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:40.105 13:37:42 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5399 00:08:40.363 [2024-04-18 13:37:42.997395] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5399: invalid model number 'SPDK_Controller' 00:08:40.363 13:37:43 -- target/invalid.sh@50 -- # out='request: 00:08:40.363 { 00:08:40.363 "nqn": "nqn.2016-06.io.spdk:cnode5399", 00:08:40.363 "model_number": "SPDK_Controller\u001f", 00:08:40.363 "method": "nvmf_create_subsystem", 00:08:40.363 "req_id": 1 00:08:40.363 } 00:08:40.363 Got JSON-RPC error response 00:08:40.363 response: 00:08:40.363 { 00:08:40.363 "code": -32602, 00:08:40.363 "message": "Invalid MN SPDK_Controller\u001f" 00:08:40.363 }' 00:08:40.363 13:37:43 -- target/invalid.sh@51 -- # [[ request: 00:08:40.363 { 00:08:40.363 "nqn": "nqn.2016-06.io.spdk:cnode5399", 00:08:40.363 "model_number": "SPDK_Controller\u001f", 00:08:40.363 "method": "nvmf_create_subsystem", 00:08:40.363 "req_id": 1 00:08:40.363 } 00:08:40.363 Got JSON-RPC error response 00:08:40.363 response: 00:08:40.363 { 00:08:40.363 "code": -32602, 00:08:40.363 "message": "Invalid MN SPDK_Controller\u001f" 00:08:40.363 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:40.363 13:37:43 -- target/invalid.sh@54 -- # gen_random_s 21 00:08:40.363 13:37:43 -- target/invalid.sh@19 -- # local length=21 ll 00:08:40.363 13:37:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' 
'80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:40.363 13:37:43 -- target/invalid.sh@21 -- # local chars 00:08:40.363 13:37:43 -- target/invalid.sh@22 -- # local string 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # printf %x 79 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # string+=O 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # printf %x 112 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # string+=p 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # printf %x 74 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # string+=J 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # printf %x 60 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # string+='<' 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # printf %x 40 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # string+='(' 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # printf %x 36 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # string+='$' 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.363 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.363 13:37:43 -- target/invalid.sh@25 -- # printf %x 88 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=X 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 46 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=. 
00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 121 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=y 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 121 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=y 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 126 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+='~' 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 77 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=M 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 83 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=S 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 92 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+='\' 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 38 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+='&' 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 115 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=s 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 53 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=5 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 50 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=2 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 43 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=+ 
00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 114 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+=r 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # printf %x 124 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:40.364 13:37:43 -- target/invalid.sh@25 -- # string+='|' 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.364 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.364 13:37:43 -- target/invalid.sh@28 -- # [[ O == \- ]] 00:08:40.364 13:37:43 -- target/invalid.sh@31 -- # echo 'OpJ<($X.yy~MS\&s52+r|' 00:08:40.364 13:37:43 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OpJ<($X.yy~MS\&s52+r|' nqn.2016-06.io.spdk:cnode16303 00:08:40.929 [2024-04-18 13:37:43.430850] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16303: invalid serial number 'OpJ<($X.yy~MS\&s52+r|' 00:08:40.929 13:37:43 -- target/invalid.sh@54 -- # out='request: 00:08:40.929 { 00:08:40.929 "nqn": "nqn.2016-06.io.spdk:cnode16303", 00:08:40.929 "serial_number": "OpJ<($X.yy~MS\\&s52+r|", 00:08:40.929 "method": "nvmf_create_subsystem", 00:08:40.929 "req_id": 1 00:08:40.929 } 00:08:40.929 Got JSON-RPC error response 00:08:40.929 response: 00:08:40.929 { 00:08:40.929 "code": -32602, 00:08:40.929 "message": "Invalid SN OpJ<($X.yy~MS\\&s52+r|" 00:08:40.929 }' 00:08:40.929 13:37:43 -- target/invalid.sh@55 -- # [[ request: 00:08:40.929 { 00:08:40.929 "nqn": "nqn.2016-06.io.spdk:cnode16303", 00:08:40.929 "serial_number": "OpJ<($X.yy~MS\\&s52+r|", 00:08:40.929 "method": "nvmf_create_subsystem", 00:08:40.929 "req_id": 1 00:08:40.929 } 00:08:40.929 Got JSON-RPC error response 00:08:40.929 response: 00:08:40.929 { 00:08:40.929 "code": -32602, 00:08:40.929 "message": "Invalid SN OpJ<($X.yy~MS\\&s52+r|" 00:08:40.929 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:40.929 13:37:43 -- target/invalid.sh@58 -- # gen_random_s 41 00:08:40.929 13:37:43 -- target/invalid.sh@19 -- # local length=41 ll 00:08:40.929 13:37:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:40.929 13:37:43 -- target/invalid.sh@21 -- # local chars 00:08:40.929 13:37:43 -- target/invalid.sh@22 -- # local string 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 62 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+='>' 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 
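The long printf/echo run above is gen_random_s from target/invalid.sh assembling a 21-character serial number one character at a time (RANDOM=0 earlier in the test makes the sequence repeatable); the resulting string OpJ<($X.yy~MS\&s52+r| is passed to nvmf_create_subsystem, which rejects it with "Invalid SN", and the same helper immediately starts on a 41-character string. A rough sketch of the helper: the index selection is not visible in this excerpt, so RANDOM modulo the table size is an assumption, and printf -v replaces the traced printf/echo pair to keep the sketch self-contained:

    gen_random_s() {
        # Build a <length>-character string from printable ASCII (codes 32-127).
        local length=$1 ll c ch string=
        local chars=({32..127})
        for (( ll = 0; ll < length; ll++ )); do
            c=${chars[RANDOM % ${#chars[@]}]}
            printf -v ch "\x$(printf '%x' "$c")"   # e.g. code 79 -> 'O', as traced above
            string+=$ch
        done
        # The [[ O == \- ]] test seen in the trace appears to guard against a
        # leading '-' before the result is echoed.
        echo "$string"
    }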
13:37:43 -- target/invalid.sh@25 -- # printf %x 39 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=\' 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 37 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=% 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 61 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+== 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 72 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=H 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 99 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=c 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 90 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=Z 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 100 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=d 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 65 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=A 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 125 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+='}' 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 81 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=Q 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 35 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+='#' 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 
13:37:43 -- target/invalid.sh@25 -- # printf %x 107 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=k 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 90 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=Z 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 110 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=n 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 69 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=E 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 86 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=V 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 99 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=c 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 48 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=0 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 91 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+='[' 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 102 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=f 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 65 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=A 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 105 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=i 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 
13:37:43 -- target/invalid.sh@25 -- # printf %x 116 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=t 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 32 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+=' ' 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 60 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+='<' 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.929 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # printf %x 63 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:40.929 13:37:43 -- target/invalid.sh@25 -- # string+='?' 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 40 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+='(' 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 98 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=b 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 120 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=x 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 40 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+='(' 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 94 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+='^' 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 40 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+='(' 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 51 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=3 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 114 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=r 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 118 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=v 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 46 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=. 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 68 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=D 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 113 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=q 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 36 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+='$' 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # printf %x 51 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:40.930 13:37:43 -- target/invalid.sh@25 -- # string+=3 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.930 13:37:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.930 13:37:43 -- target/invalid.sh@28 -- # [[ > == \- ]] 00:08:40.930 13:37:43 -- target/invalid.sh@31 -- # echo '>'\''%=HcZdA}Q#kZnEVc0[fAit '\''%=HcZdA}Q#kZnEVc0[fAit '%=HcZdA}Q#kZnEVc0[fAit '\''%=HcZdA}Q#kZnEVc0[fAit '\''%=HcZdA}Q#kZnEVc0[fAit '%=HcZdA}Q#kZnEVc0[fAit '%=HcZdA}Q#kZnEVc0[fAit /dev/null' 00:08:44.687 13:37:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.687 13:37:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:44.687 13:37:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:44.687 13:37:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.687 13:37:47 -- common/autotest_common.sh@10 -- # set +x 00:08:47.998 13:37:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:47.998 13:37:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.998 13:37:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.998 13:37:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.998 13:37:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.998 13:37:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.998 13:37:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.998 13:37:50 -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.998 13:37:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.998 
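The array declarations above and the scan that follows repeat the table-driven device selection from the earlier nvmftestinit run: per-family lists (e810, x722, mlx) are filled from a pci_bus_cache map and, with the mlx5 NIC type selected, only the Mellanox entries are kept. A hypothetical minimal version; the pci_bus_cache layout is inferred from how it is indexed in the trace, the variable name SPDK_TEST_NVMF_NICS is an assumption, and the literal values are taken from the "Found ..." lines in this log:

    declare -A pci_bus_cache                       # "<vendor>:<device>" -> space-separated BDF list
    pci_bus_cache["0x15b3:0x1015"]="0000:81:00.0 0000:81:00.1"
    mellanox=0x15b3
    mlx=(${pci_bus_cache["$mellanox:0x1015"]})     # the two ConnectX functions found above
    [[ $SPDK_TEST_NVMF_NICS == mlx5 ]] && pci_devs=("${mlx[@]}")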
13:37:50 -- nvmf/common.sh@296 -- # e810=() 00:08:47.998 13:37:50 -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.998 13:37:50 -- nvmf/common.sh@297 -- # x722=() 00:08:47.998 13:37:50 -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.998 13:37:50 -- nvmf/common.sh@298 -- # mlx=() 00:08:47.999 13:37:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.999 13:37:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.999 13:37:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.999 13:37:50 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:47.999 13:37:50 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:47.999 13:37:50 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:47.999 13:37:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.999 13:37:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:08:47.999 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:08:47.999 13:37:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.999 13:37:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:08:47.999 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:08:47.999 13:37:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.999 13:37:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.999 13:37:50 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.999 13:37:50 -- 
nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:47.999 13:37:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.999 13:37:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:08:47.999 Found net devices under 0000:81:00.0: mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.999 13:37:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.999 13:37:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:47.999 13:37:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.999 13:37:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:08:47.999 Found net devices under 0000:81:00.1: mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.999 13:37:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:47.999 13:37:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:47.999 13:37:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:47.999 13:37:50 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:47.999 13:37:50 -- nvmf/common.sh@58 -- # uname 00:08:47.999 13:37:50 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:47.999 13:37:50 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:47.999 13:37:50 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:47.999 13:37:50 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:47.999 13:37:50 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:47.999 13:37:50 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:47.999 13:37:50 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:47.999 13:37:50 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:47.999 13:37:50 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:47.999 13:37:50 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:47.999 13:37:50 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:47.999 13:37:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.999 13:37:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.999 13:37:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.999 13:37:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.999 13:37:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.999 13:37:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@105 -- # continue 2 00:08:47.999 13:37:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@105 -- # continue 2 00:08:47.999 13:37:50 -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:47.999 13:37:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.999 13:37:50 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:47.999 13:37:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:47.999 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.999 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:08:47.999 altname enp129s0f0np0 00:08:47.999 inet 192.168.100.8/24 scope global mlx_0_0 00:08:47.999 valid_lft forever preferred_lft forever 00:08:47.999 13:37:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:47.999 13:37:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.999 13:37:50 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:47.999 13:37:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:47.999 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.999 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:08:47.999 altname enp129s0f1np1 00:08:47.999 inet 192.168.100.9/24 scope global mlx_0_1 00:08:47.999 valid_lft forever preferred_lft forever 00:08:47.999 13:37:50 -- nvmf/common.sh@411 -- # return 0 00:08:47.999 13:37:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:47.999 13:37:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:47.999 13:37:50 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:47.999 13:37:50 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:47.999 13:37:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.999 13:37:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.999 13:37:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.999 13:37:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.999 13:37:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.999 13:37:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@105 -- # continue 2 00:08:47.999 13:37:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.999 13:37:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.999 13:37:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@105 -- # continue 2 
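The trace above is nvmf/common.sh walking get_rdma_if_list and get_ip_address for each mlx_0_* netdev; as a rough standalone sketch (interface name mlx_0_0 taken from this particular run, not a general default), the extraction it performs is just:

# sketch only: read the IPv4 address (prefix length stripped) of one RDMA netdev,
# using the same ip/awk/cut pipeline the trace shows; on this host it prints 192.168.100.8
interface=mlx_0_0
ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1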
00:08:47.999 13:37:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.999 13:37:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.999 13:37:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.999 13:37:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.999 13:37:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.999 13:37:50 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:47.999 192.168.100.9' 00:08:47.999 13:37:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:47.999 192.168.100.9' 00:08:47.999 13:37:50 -- nvmf/common.sh@446 -- # head -n 1 00:08:47.999 13:37:50 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:47.999 13:37:50 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:47.999 192.168.100.9' 00:08:47.999 13:37:50 -- nvmf/common.sh@447 -- # tail -n +2 00:08:47.999 13:37:50 -- nvmf/common.sh@447 -- # head -n 1 00:08:47.999 13:37:50 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:47.999 13:37:50 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:47.999 13:37:50 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:47.999 13:37:50 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:47.999 13:37:50 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:47.999 13:37:50 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:47.999 13:37:50 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:47.999 13:37:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:47.999 13:37:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:47.999 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.999 13:37:50 -- nvmf/common.sh@470 -- # nvmfpid=1066653 00:08:47.999 13:37:50 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:47.999 13:37:50 -- nvmf/common.sh@471 -- # waitforlisten 1066653 00:08:47.999 13:37:50 -- common/autotest_common.sh@817 -- # '[' -z 1066653 ']' 00:08:47.999 13:37:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.999 13:37:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:47.999 13:37:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.999 13:37:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:47.999 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.999 [2024-04-18 13:37:50.354572] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:08:47.999 [2024-04-18 13:37:50.354651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.999 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.999 [2024-04-18 13:37:50.431953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:47.999 [2024-04-18 13:37:50.552368] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.999 [2024-04-18 13:37:50.552432] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.999 [2024-04-18 13:37:50.552449] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.999 [2024-04-18 13:37:50.552463] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.999 [2024-04-18 13:37:50.552475] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.999 [2024-04-18 13:37:50.552577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.999 [2024-04-18 13:37:50.552632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.999 [2024-04-18 13:37:50.552635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.999 13:37:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:47.999 13:37:50 -- common/autotest_common.sh@850 -- # return 0 00:08:47.999 13:37:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:47.999 13:37:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:47.999 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.999 13:37:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.999 13:37:50 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:47.999 13:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.999 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.999 [2024-04-18 13:37:50.735044] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23167d0/0x231acc0) succeed. 00:08:47.999 [2024-04-18 13:37:50.747376] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2317d20/0x235c350) succeed. 
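Condensed from the startup trace above (binary path, RPC socket and parameters are the ones this run logged; rpc_cmd in the trace is a thin wrapper around rpc.py), the target bring-up for the abort test amounts to:

# sketch of the traced sequence: launch nvmf_tgt, wait for its RPC socket, create the RDMA transport
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# waitforlisten polls /var/tmp/spdk.sock until the target answers RPCs, as the log message above notes
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256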
00:08:48.257 13:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.257 13:37:50 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:48.257 13:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.257 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:48.257 Malloc0 00:08:48.257 13:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.257 13:37:50 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:48.257 13:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.257 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:48.257 Delay0 00:08:48.257 13:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.257 13:37:50 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:48.257 13:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.257 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:48.257 13:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.257 13:37:50 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:48.257 13:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.257 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:48.257 13:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.257 13:37:50 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:48.257 13:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.258 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:48.258 [2024-04-18 13:37:50.940125] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:48.258 13:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.258 13:37:50 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:48.258 13:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.258 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:08:48.258 13:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.258 13:37:50 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:48.258 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.258 [2024-04-18 13:37:51.032914] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:50.785 Initializing NVMe Controllers 00:08:50.785 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:50.785 controller IO queue size 128 less than required 00:08:50.785 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:50.785 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:50.785 Initialization complete. Launching workers. 
00:08:50.785 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36282 00:08:50.785 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36343, failed to submit 62 00:08:50.785 success 36283, unsuccess 60, failed 0 00:08:50.785 13:37:53 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.785 13:37:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.785 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 13:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.785 13:37:53 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:50.785 13:37:53 -- target/abort.sh@38 -- # nvmftestfini 00:08:50.785 13:37:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:50.785 13:37:53 -- nvmf/common.sh@117 -- # sync 00:08:50.785 13:37:53 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:50.785 13:37:53 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:50.785 13:37:53 -- nvmf/common.sh@120 -- # set +e 00:08:50.785 13:37:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.785 13:37:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:50.785 rmmod nvme_rdma 00:08:50.785 rmmod nvme_fabrics 00:08:50.785 13:37:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.785 13:37:53 -- nvmf/common.sh@124 -- # set -e 00:08:50.785 13:37:53 -- nvmf/common.sh@125 -- # return 0 00:08:50.785 13:37:53 -- nvmf/common.sh@478 -- # '[' -n 1066653 ']' 00:08:50.785 13:37:53 -- nvmf/common.sh@479 -- # killprocess 1066653 00:08:50.785 13:37:53 -- common/autotest_common.sh@936 -- # '[' -z 1066653 ']' 00:08:50.785 13:37:53 -- common/autotest_common.sh@940 -- # kill -0 1066653 00:08:50.785 13:37:53 -- common/autotest_common.sh@941 -- # uname 00:08:50.785 13:37:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.785 13:37:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1066653 00:08:50.785 13:37:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:50.785 13:37:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:50.785 13:37:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1066653' 00:08:50.785 killing process with pid 1066653 00:08:50.785 13:37:53 -- common/autotest_common.sh@955 -- # kill 1066653 00:08:50.785 13:37:53 -- common/autotest_common.sh@960 -- # wait 1066653 00:08:51.042 13:37:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:51.042 13:37:53 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:51.042 00:08:51.042 real 0m6.279s 00:08:51.042 user 0m12.136s 00:08:51.042 sys 0m2.464s 00:08:51.042 13:37:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:51.042 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:08:51.042 ************************************ 00:08:51.042 END TEST nvmf_abort 00:08:51.042 ************************************ 00:08:51.042 13:37:53 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:51.042 13:37:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:51.042 13:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.042 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:08:51.042 ************************************ 00:08:51.042 START TEST nvmf_ns_hotplug_stress 00:08:51.042 ************************************ 00:08:51.042 13:37:53 -- common/autotest_common.sh@1111 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:51.042 * Looking for test storage... 00:08:51.042 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:51.300 13:37:53 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.300 13:37:53 -- nvmf/common.sh@7 -- # uname -s 00:08:51.300 13:37:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.300 13:37:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.300 13:37:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.300 13:37:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.300 13:37:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.300 13:37:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.300 13:37:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.300 13:37:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.300 13:37:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.300 13:37:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.300 13:37:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:08:51.300 13:37:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:08:51.300 13:37:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.300 13:37:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.300 13:37:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.300 13:37:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.300 13:37:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:51.300 13:37:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.300 13:37:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.300 13:37:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.300 13:37:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.300 13:37:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.300 13:37:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.301 13:37:53 -- paths/export.sh@5 -- # export PATH 00:08:51.301 13:37:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.301 13:37:53 -- nvmf/common.sh@47 -- # : 0 00:08:51.301 13:37:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.301 13:37:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.301 13:37:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.301 13:37:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.301 13:37:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.301 13:37:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.301 13:37:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.301 13:37:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.301 13:37:53 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:51.301 13:37:53 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:08:51.301 13:37:53 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:51.301 13:37:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.301 13:37:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:51.301 13:37:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:51.301 13:37:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:51.301 13:37:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.301 13:37:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.301 13:37:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.301 13:37:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:51.301 13:37:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:51.301 13:37:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.301 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:08:53.828 13:37:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:53.828 13:37:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:53.828 13:37:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:53.828 13:37:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:53.828 13:37:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:53.828 13:37:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:53.828 13:37:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:53.828 13:37:56 -- nvmf/common.sh@295 -- # net_devs=() 00:08:53.828 13:37:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:53.828 13:37:56 -- nvmf/common.sh@296 -- 
# e810=() 00:08:53.828 13:37:56 -- nvmf/common.sh@296 -- # local -ga e810 00:08:53.828 13:37:56 -- nvmf/common.sh@297 -- # x722=() 00:08:53.828 13:37:56 -- nvmf/common.sh@297 -- # local -ga x722 00:08:53.828 13:37:56 -- nvmf/common.sh@298 -- # mlx=() 00:08:53.828 13:37:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:53.828 13:37:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.828 13:37:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:53.828 13:37:56 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:53.828 13:37:56 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:53.828 13:37:56 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:53.828 13:37:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:53.828 13:37:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.828 13:37:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:08:53.828 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:08:53.828 13:37:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:53.828 13:37:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.828 13:37:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:08:53.828 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:08:53.828 13:37:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:53.828 13:37:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:53.828 13:37:56 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:53.828 13:37:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.828 13:37:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.828 13:37:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:08:53.828 13:37:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.828 13:37:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:08:53.828 Found net devices under 0000:81:00.0: mlx_0_0 00:08:53.828 13:37:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.828 13:37:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.828 13:37:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.828 13:37:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:53.828 13:37:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.828 13:37:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:08:53.828 Found net devices under 0000:81:00.1: mlx_0_1 00:08:53.828 13:37:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.829 13:37:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:53.829 13:37:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:53.829 13:37:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:53.829 13:37:56 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:53.829 13:37:56 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:53.829 13:37:56 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:53.829 13:37:56 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:53.829 13:37:56 -- nvmf/common.sh@58 -- # uname 00:08:53.829 13:37:56 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:53.829 13:37:56 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:53.829 13:37:56 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:53.829 13:37:56 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:53.829 13:37:56 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:53.829 13:37:56 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:53.829 13:37:56 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:53.829 13:37:56 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:53.829 13:37:56 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:53.829 13:37:56 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:53.829 13:37:56 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:53.829 13:37:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:53.829 13:37:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:53.829 13:37:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:53.829 13:37:56 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:53.829 13:37:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:53.829 13:37:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:53.829 13:37:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:53.829 13:37:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:53.829 13:37:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:53.829 13:37:56 -- nvmf/common.sh@105 -- # continue 2 00:08:53.829 13:37:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:53.829 13:37:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:53.829 13:37:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:53.829 13:37:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:53.829 13:37:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:53.829 13:37:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:53.829 13:37:56 -- nvmf/common.sh@105 -- # continue 2 00:08:53.829 13:37:56 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:08:53.829 13:37:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:53.829 13:37:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:53.829 13:37:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:53.829 13:37:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:53.829 13:37:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:53.829 13:37:56 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:53.829 13:37:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:53.829 13:37:56 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:53.829 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:53.829 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:08:53.829 altname enp129s0f0np0 00:08:53.829 inet 192.168.100.8/24 scope global mlx_0_0 00:08:53.829 valid_lft forever preferred_lft forever 00:08:54.087 13:37:56 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:54.087 13:37:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:54.087 13:37:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.087 13:37:56 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:54.087 13:37:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:54.087 13:37:56 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:54.087 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.087 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:08:54.087 altname enp129s0f1np1 00:08:54.087 inet 192.168.100.9/24 scope global mlx_0_1 00:08:54.087 valid_lft forever preferred_lft forever 00:08:54.087 13:37:56 -- nvmf/common.sh@411 -- # return 0 00:08:54.087 13:37:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:54.087 13:37:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:54.087 13:37:56 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:54.087 13:37:56 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:54.087 13:37:56 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:54.087 13:37:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.087 13:37:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:54.087 13:37:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:54.087 13:37:56 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.087 13:37:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:54.087 13:37:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.087 13:37:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.087 13:37:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.087 13:37:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:54.087 13:37:56 -- nvmf/common.sh@105 -- # continue 2 00:08:54.087 13:37:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.087 13:37:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.087 13:37:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.087 13:37:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.087 13:37:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.087 13:37:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:54.087 13:37:56 -- nvmf/common.sh@105 -- # continue 2 00:08:54.087 13:37:56 -- nvmf/common.sh@86 
-- # for nic_name in $(get_rdma_if_list) 00:08:54.087 13:37:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:54.087 13:37:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.087 13:37:56 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:54.087 13:37:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:54.087 13:37:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.087 13:37:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.087 13:37:56 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:54.087 192.168.100.9' 00:08:54.087 13:37:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:54.087 192.168.100.9' 00:08:54.087 13:37:56 -- nvmf/common.sh@446 -- # head -n 1 00:08:54.087 13:37:56 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:54.087 13:37:56 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:54.087 192.168.100.9' 00:08:54.087 13:37:56 -- nvmf/common.sh@447 -- # tail -n +2 00:08:54.087 13:37:56 -- nvmf/common.sh@447 -- # head -n 1 00:08:54.087 13:37:56 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:54.087 13:37:56 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:54.087 13:37:56 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:54.087 13:37:56 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:54.087 13:37:56 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:54.087 13:37:56 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:54.087 13:37:56 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:08:54.087 13:37:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:54.087 13:37:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.087 13:37:56 -- common/autotest_common.sh@10 -- # set +x 00:08:54.087 13:37:56 -- nvmf/common.sh@470 -- # nvmfpid=1069136 00:08:54.087 13:37:56 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:54.087 13:37:56 -- nvmf/common.sh@471 -- # waitforlisten 1069136 00:08:54.087 13:37:56 -- common/autotest_common.sh@817 -- # '[' -z 1069136 ']' 00:08:54.087 13:37:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.087 13:37:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:54.087 13:37:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.087 13:37:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:54.087 13:37:56 -- common/autotest_common.sh@10 -- # set +x 00:08:54.087 [2024-04-18 13:37:56.761991] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:08:54.087 [2024-04-18 13:37:56.762086] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.087 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.087 [2024-04-18 13:37:56.844559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.345 [2024-04-18 13:37:56.970571] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.345 [2024-04-18 13:37:56.970635] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.345 [2024-04-18 13:37:56.970651] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.345 [2024-04-18 13:37:56.970665] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.345 [2024-04-18 13:37:56.970677] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.345 [2024-04-18 13:37:56.970771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.345 [2024-04-18 13:37:56.970810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.345 [2024-04-18 13:37:56.970812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.345 13:37:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:54.345 13:37:57 -- common/autotest_common.sh@850 -- # return 0 00:08:54.345 13:37:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:54.345 13:37:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:54.345 13:37:57 -- common/autotest_common.sh@10 -- # set +x 00:08:54.345 13:37:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.345 13:37:57 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:08:54.345 13:37:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:54.910 [2024-04-18 13:37:57.633458] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x81f7d0/0x823cc0) succeed. 00:08:54.910 [2024-04-18 13:37:57.645725] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x820d20/0x865350) succeed. 
00:08:55.168 13:37:57 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.426 13:37:58 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:55.989 [2024-04-18 13:37:58.585380] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:55.989 13:37:58 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:56.556 13:37:59 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:56.814 Malloc0 00:08:56.814 13:37:59 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:57.378 Delay0 00:08:57.378 13:37:59 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.635 13:38:00 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:57.892 NULL1 00:08:57.892 13:38:00 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:58.149 13:38:00 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1069685 00:08:58.149 13:38:00 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:58.149 13:38:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:08:58.149 13:38:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.149 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.520 Read completed with error (sct=0, sc=11) 00:08:59.520 13:38:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.777 13:38:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:08:59.777 13:38:02 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:00.034 true 00:09:00.034 13:38:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:00.034 13:38:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.598 13:38:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.856 13:38:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:09:00.856 13:38:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:01.421 true 00:09:01.421 13:38:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:01.421 13:38:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.985 13:38:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.242 13:38:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:09:02.242 13:38:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:02.501 true 00:09:02.501 13:38:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:02.501 13:38:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.495 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:09:03.495 13:38:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.495 13:38:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:09:03.495 13:38:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:04.060 true 00:09:04.060 13:38:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:04.060 13:38:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.317 13:38:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.574 13:38:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:09:04.574 13:38:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:04.831 true 00:09:04.831 13:38:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:04.831 13:38:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.087 13:38:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.651 13:38:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:09:05.651 13:38:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:05.909 true 00:09:05.909 13:38:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:05.909 13:38:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.166 13:38:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.423 13:38:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:09:06.423 13:38:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:06.987 true 00:09:06.987 13:38:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:06.987 13:38:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.244 13:38:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.807 13:38:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:09:07.807 13:38:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:08.074 true 00:09:08.074 13:38:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:08.074 13:38:10 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.466 13:38:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.723 13:38:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:09:09.723 13:38:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:09.980 true 00:09:09.980 13:38:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:09.980 13:38:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 13:38:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.168 13:38:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:09:11.168 13:38:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:11.425 true 00:09:11.425 13:38:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 
1069685 00:09:11.425 13:38:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.356 13:38:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.613 13:38:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:09:12.613 13:38:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:12.870 true 00:09:12.870 13:38:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:12.870 13:38:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 13:38:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.059 13:38:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:09:14.059 13:38:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:14.316 true 00:09:14.316 13:38:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:14.316 13:38:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:15.248 13:38:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.248 13:38:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:09:15.248 13:38:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:15.812 true 00:09:15.812 13:38:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:15.812 13:38:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.376 13:38:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.652 13:38:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:09:16.652 13:38:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:16.922 true 00:09:16.922 13:38:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:16.922 13:38:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 13:38:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.110 13:38:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:09:18.110 13:38:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:18.675 true 00:09:18.675 13:38:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:18.675 13:38:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.932 13:38:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.446 13:38:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:09:19.446 13:38:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:19.703 true 00:09:19.703 13:38:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:19.703 13:38:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.267 13:38:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.531 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.787 13:38:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:09:20.787 13:38:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:21.045 true 00:09:21.045 13:38:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:21.045 13:38:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.609 13:38:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.865 13:38:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:09:21.865 13:38:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:22.122 true 00:09:22.122 13:38:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:22.122 13:38:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.492 13:38:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.006 13:38:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:09:24.006 13:38:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:24.263 true 00:09:24.263 13:38:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:24.263 13:38:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.828 13:38:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.343 13:38:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:09:25.343 13:38:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:25.600 true 00:09:25.600 13:38:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:25.600 13:38:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.165 13:38:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.680 13:38:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:09:26.680 13:38:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:26.938 true 00:09:26.938 13:38:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:26.938 13:38:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.503 13:38:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.068 13:38:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:09:28.068 13:38:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:28.325 true 00:09:28.325 13:38:30 -- target/ns_hotplug_stress.sh@35 -- # 
kill -0 1069685 00:09:28.325 13:38:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.582 13:38:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.840 13:38:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:09:28.840 13:38:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:29.097 true 00:09:29.097 13:38:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:29.097 13:38:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.662 13:38:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.918 13:38:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:09:29.918 13:38:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:30.516 true 00:09:30.516 13:38:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:30.516 13:38:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.516 Initializing NVMe Controllers 00:09:30.516 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:30.516 Controller IO queue size 128, less than required. 00:09:30.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:30.516 Controller IO queue size 128, less than required. 00:09:30.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:30.516 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:30.516 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:30.516 Initialization complete. Launching workers. 
00:09:30.516 ======================================================== 00:09:30.516 Latency(us) 00:09:30.516 Device Information : IOPS MiB/s Average min max 00:09:30.516 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5991.56 2.93 14610.57 1140.21 1171524.04 00:09:30.516 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 20208.53 9.87 6333.99 2054.22 401581.18 00:09:30.516 ======================================================== 00:09:30.516 Total : 26200.09 12.79 8226.71 1140.21 1171524.04 00:09:30.516 00:09:31.079 13:38:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.337 13:38:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:09:31.337 13:38:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:31.594 true 00:09:31.594 13:38:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1069685 00:09:31.594 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1069685) - No such process 00:09:31.594 13:38:34 -- target/ns_hotplug_stress.sh@44 -- # wait 1069685 00:09:31.594 13:38:34 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:09:31.594 13:38:34 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:09:31.594 13:38:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:31.594 13:38:34 -- nvmf/common.sh@117 -- # sync 00:09:31.594 13:38:34 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:31.594 13:38:34 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:31.594 13:38:34 -- nvmf/common.sh@120 -- # set +e 00:09:31.594 13:38:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.594 13:38:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:31.594 rmmod nvme_rdma 00:09:31.594 rmmod nvme_fabrics 00:09:31.594 13:38:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.594 13:38:34 -- nvmf/common.sh@124 -- # set -e 00:09:31.594 13:38:34 -- nvmf/common.sh@125 -- # return 0 00:09:31.594 13:38:34 -- nvmf/common.sh@478 -- # '[' -n 1069136 ']' 00:09:31.594 13:38:34 -- nvmf/common.sh@479 -- # killprocess 1069136 00:09:31.594 13:38:34 -- common/autotest_common.sh@936 -- # '[' -z 1069136 ']' 00:09:31.594 13:38:34 -- common/autotest_common.sh@940 -- # kill -0 1069136 00:09:31.594 13:38:34 -- common/autotest_common.sh@941 -- # uname 00:09:31.594 13:38:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:31.594 13:38:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1069136 00:09:31.594 13:38:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:31.594 13:38:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:31.594 13:38:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1069136' 00:09:31.594 killing process with pid 1069136 00:09:31.594 13:38:34 -- common/autotest_common.sh@955 -- # kill 1069136 00:09:31.594 13:38:34 -- common/autotest_common.sh@960 -- # wait 1069136 00:09:32.160 13:38:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:32.160 13:38:34 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:32.160 00:09:32.160 real 0m40.937s 00:09:32.160 user 2m51.119s 00:09:32.160 sys 0m6.637s 00:09:32.160 13:38:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:32.160 13:38:34 -- common/autotest_common.sh@10 -- # set +x 00:09:32.160 
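Note on the trace above: ns_hotplug_stress.sh lines 35-41 keep detaching and re-attaching namespace 1 while growing the NULL1 bdev, for as long as the perf process (pid 1069685 here) stays alive. A rough reconstruction of that loop, as a sketch inferred from the xtrace rather than the script's exact text ($rpc_py and $PERF_PID are placeholder names; sleeps/timing omitted):

  # sketch of the hotplug loop visible in the xtrace above; the real script
  # is spdk/test/nvmf/target/ns_hotplug_stress.sh and may differ in detail
  while kill -0 "$PERF_PID"; do                                        # trace line 35
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # trace line 36
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # trace line 37
      null_size=$((null_size + 1))                                       # trace line 40
      "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # trace line 41
  done

The "Message suppressed ... Read completed with error (sct=0, sc=11)" bursts are the perf workload seeing I/O aborts each time the namespace is pulled, which is the behaviour this test exercises.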
************************************ 00:09:32.160 END TEST nvmf_ns_hotplug_stress 00:09:32.160 ************************************ 00:09:32.160 13:38:34 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:32.160 13:38:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:32.160 13:38:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.160 13:38:34 -- common/autotest_common.sh@10 -- # set +x 00:09:32.160 ************************************ 00:09:32.160 START TEST nvmf_connect_stress 00:09:32.160 ************************************ 00:09:32.160 13:38:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:32.160 * Looking for test storage... 00:09:32.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:32.160 13:38:34 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.160 13:38:34 -- nvmf/common.sh@7 -- # uname -s 00:09:32.160 13:38:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.160 13:38:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.160 13:38:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.160 13:38:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.160 13:38:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.160 13:38:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.160 13:38:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.160 13:38:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.160 13:38:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.160 13:38:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.160 13:38:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:09:32.160 13:38:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:09:32.160 13:38:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.160 13:38:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.160 13:38:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.160 13:38:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.160 13:38:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:32.160 13:38:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.160 13:38:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.160 13:38:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.160 13:38:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.160 13:38:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.160 13:38:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.160 13:38:34 -- paths/export.sh@5 -- # export PATH 00:09:32.160 13:38:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.160 13:38:34 -- nvmf/common.sh@47 -- # : 0 00:09:32.160 13:38:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.160 13:38:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.160 13:38:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.160 13:38:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.160 13:38:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.160 13:38:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.160 13:38:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.160 13:38:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.160 13:38:34 -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:32.161 13:38:34 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:32.161 13:38:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.161 13:38:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:32.161 13:38:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:32.161 13:38:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:32.161 13:38:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.161 13:38:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.161 13:38:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.161 13:38:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:32.161 13:38:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:32.161 13:38:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:32.161 13:38:34 -- common/autotest_common.sh@10 -- # set +x 00:09:35.443 13:38:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:35.443 13:38:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.443 13:38:37 -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:35.443 13:38:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.443 13:38:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.443 13:38:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.443 13:38:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.443 13:38:37 -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.443 13:38:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.443 13:38:37 -- nvmf/common.sh@296 -- # e810=() 00:09:35.443 13:38:37 -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.443 13:38:37 -- nvmf/common.sh@297 -- # x722=() 00:09:35.443 13:38:37 -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.443 13:38:37 -- nvmf/common.sh@298 -- # mlx=() 00:09:35.443 13:38:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.443 13:38:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.443 13:38:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.443 13:38:37 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:35.443 13:38:37 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:35.443 13:38:37 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:35.443 13:38:37 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:35.443 13:38:37 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:35.443 13:38:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.443 13:38:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.443 13:38:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:09:35.443 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:09:35.443 13:38:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:35.443 13:38:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.444 13:38:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:09:35.444 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:09:35.444 13:38:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 
00:09:35.444 13:38:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.444 13:38:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.444 13:38:37 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.444 13:38:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:35.444 13:38:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.444 13:38:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:09:35.444 Found net devices under 0000:81:00.0: mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.444 13:38:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.444 13:38:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:35.444 13:38:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.444 13:38:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:09:35.444 Found net devices under 0000:81:00.1: mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.444 13:38:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:35.444 13:38:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:35.444 13:38:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:35.444 13:38:37 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:35.444 13:38:37 -- nvmf/common.sh@58 -- # uname 00:09:35.444 13:38:37 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:35.444 13:38:37 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:35.444 13:38:37 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:35.444 13:38:37 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:35.444 13:38:37 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:35.444 13:38:37 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:35.444 13:38:37 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:35.444 13:38:37 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:35.444 13:38:37 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:35.444 13:38:37 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:35.444 13:38:37 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:35.444 13:38:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:35.444 13:38:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:35.444 13:38:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:35.444 13:38:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.444 13:38:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:35.444 13:38:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@105 -- # continue 2 00:09:35.444 13:38:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:09:35.444 13:38:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@105 -- # continue 2 00:09:35.444 13:38:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:35.444 13:38:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.444 13:38:37 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:35.444 13:38:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:35.444 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.444 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:09:35.444 altname enp129s0f0np0 00:09:35.444 inet 192.168.100.8/24 scope global mlx_0_0 00:09:35.444 valid_lft forever preferred_lft forever 00:09:35.444 13:38:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:35.444 13:38:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.444 13:38:37 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:35.444 13:38:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:35.444 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.444 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:09:35.444 altname enp129s0f1np1 00:09:35.444 inet 192.168.100.9/24 scope global mlx_0_1 00:09:35.444 valid_lft forever preferred_lft forever 00:09:35.444 13:38:37 -- nvmf/common.sh@411 -- # return 0 00:09:35.444 13:38:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:35.444 13:38:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:35.444 13:38:37 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:35.444 13:38:37 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:35.444 13:38:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:35.444 13:38:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:35.444 13:38:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:35.444 13:38:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.444 13:38:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:35.444 13:38:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@105 -- # continue 2 00:09:35.444 13:38:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.444 13:38:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.444 13:38:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@105 -- # continue 2 00:09:35.444 13:38:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:35.444 13:38:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.444 13:38:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:35.444 13:38:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:35.444 13:38:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:35.444 13:38:37 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:35.444 192.168.100.9' 00:09:35.444 13:38:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:35.444 192.168.100.9' 00:09:35.444 13:38:37 -- nvmf/common.sh@446 -- # head -n 1 00:09:35.444 13:38:37 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:35.444 13:38:37 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:35.444 192.168.100.9' 00:09:35.444 13:38:37 -- nvmf/common.sh@447 -- # tail -n +2 00:09:35.444 13:38:37 -- nvmf/common.sh@447 -- # head -n 1 00:09:35.444 13:38:37 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:35.444 13:38:37 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:35.444 13:38:37 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:35.444 13:38:37 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:35.444 13:38:37 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:35.444 13:38:37 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:35.444 13:38:37 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:35.444 13:38:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:35.444 13:38:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:35.444 13:38:37 -- common/autotest_common.sh@10 -- # set +x 00:09:35.444 13:38:37 -- nvmf/common.sh@470 -- # nvmfpid=1075675 00:09:35.444 13:38:37 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:35.444 13:38:37 -- nvmf/common.sh@471 -- # waitforlisten 1075675 00:09:35.444 13:38:37 -- common/autotest_common.sh@817 -- # '[' -z 1075675 ']' 00:09:35.444 13:38:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.444 13:38:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:35.444 13:38:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:35.444 13:38:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:35.444 13:38:37 -- common/autotest_common.sh@10 -- # set +x 00:09:35.444 [2024-04-18 13:38:37.757208] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:09:35.445 [2024-04-18 13:38:37.757322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.445 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.445 [2024-04-18 13:38:37.853018] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.445 [2024-04-18 13:38:37.987928] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.445 [2024-04-18 13:38:37.988004] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.445 [2024-04-18 13:38:37.988021] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.445 [2024-04-18 13:38:37.988035] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.445 [2024-04-18 13:38:37.988047] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.445 [2024-04-18 13:38:37.988151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.445 [2024-04-18 13:38:37.988206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.445 [2024-04-18 13:38:37.988210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.445 13:38:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:35.445 13:38:38 -- common/autotest_common.sh@850 -- # return 0 00:09:35.445 13:38:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:35.445 13:38:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:35.445 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:09:35.445 13:38:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.445 13:38:38 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:35.445 13:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.445 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:09:35.445 [2024-04-18 13:38:38.159627] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x239b7d0/0x239fcc0) succeed. 00:09:35.445 [2024-04-18 13:38:38.171677] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x239cd20/0x23e1350) succeed. 
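Note on the setup that follows: with the rdma transport created above, connect_stress.sh lines 16-21 build the target that the stress tool will hammer. Collected into one place, the equivalent rpc.py sequence is roughly the sketch below (paths shortened; rpc.py stands for spdk/scripts/rpc.py, and the script issues these through its rpc_cmd wrapper):

  # hedged summary of the setup RPCs traced in the following lines
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py bdev_null_create NULL1 1000 512
  # after which test/nvme/connect_stress/connect_stress is launched against
  # trtype:rdma traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1

The repeated "kill -0 1075708 / rpc_cmd" pairs further down are the script polling that the connect_stress process is still alive while it issues RPCs against the target.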
00:09:35.704 13:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:35.704 13:38:38 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.704 13:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.704 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:09:35.704 13:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:35.704 13:38:38 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:35.704 13:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.704 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:09:35.704 [2024-04-18 13:38:38.310064] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:35.704 13:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:35.704 13:38:38 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:35.704 13:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.704 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:09:35.704 NULL1 00:09:35.704 13:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:35.704 13:38:38 -- target/connect_stress.sh@21 -- # PERF_PID=1075708 00:09:35.704 13:38:38 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:35.704 13:38:38 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:35.704 13:38:38 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # seq 1 20 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for 
i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.704 13:38:38 -- target/connect_stress.sh@28 -- # cat 00:09:35.704 13:38:38 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:35.704 13:38:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.704 13:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.704 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:09:35.962 13:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:35.962 13:38:38 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:35.962 13:38:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.962 13:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.962 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:09:36.527 13:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.527 13:38:39 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:36.527 13:38:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.527 13:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.527 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:09:36.785 13:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.785 13:38:39 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:36.785 13:38:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.785 13:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.785 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.043 13:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.043 13:38:39 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:37.043 13:38:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.043 13:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.043 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.300 13:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.300 13:38:39 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:37.300 13:38:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.300 13:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.300 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.558 13:38:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.558 13:38:40 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:37.558 13:38:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.558 
13:38:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.558 13:38:40 -- common/autotest_common.sh@10 -- # set +x 00:09:38.122 13:38:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.122 13:38:40 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:38.122 13:38:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.122 13:38:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.122 13:38:40 -- common/autotest_common.sh@10 -- # set +x 00:09:38.380 13:38:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.380 13:38:40 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:38.380 13:38:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.380 13:38:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.380 13:38:40 -- common/autotest_common.sh@10 -- # set +x 00:09:38.637 13:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.637 13:38:41 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:38.637 13:38:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.637 13:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.637 13:38:41 -- common/autotest_common.sh@10 -- # set +x 00:09:38.895 13:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.895 13:38:41 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:38.895 13:38:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.895 13:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.895 13:38:41 -- common/autotest_common.sh@10 -- # set +x 00:09:39.151 13:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:39.151 13:38:41 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:39.151 13:38:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.151 13:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:39.151 13:38:41 -- common/autotest_common.sh@10 -- # set +x 00:09:39.715 13:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:39.715 13:38:42 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:39.715 13:38:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.715 13:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:39.715 13:38:42 -- common/autotest_common.sh@10 -- # set +x 00:09:39.972 13:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:39.972 13:38:42 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:39.972 13:38:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.972 13:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:39.972 13:38:42 -- common/autotest_common.sh@10 -- # set +x 00:09:40.228 13:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.228 13:38:42 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:40.228 13:38:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.228 13:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.228 13:38:42 -- common/autotest_common.sh@10 -- # set +x 00:09:40.485 13:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.485 13:38:43 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:40.485 13:38:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.485 13:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.485 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:09:41.048 13:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.048 13:38:43 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:41.048 13:38:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.048 13:38:43 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.048 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:09:41.305 13:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.305 13:38:43 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:41.305 13:38:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.305 13:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.305 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:09:41.562 13:38:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.562 13:38:44 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:41.562 13:38:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.562 13:38:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.562 13:38:44 -- common/autotest_common.sh@10 -- # set +x 00:09:41.819 13:38:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.819 13:38:44 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:41.819 13:38:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.819 13:38:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.819 13:38:44 -- common/autotest_common.sh@10 -- # set +x 00:09:42.076 13:38:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.076 13:38:44 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:42.076 13:38:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.076 13:38:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.076 13:38:44 -- common/autotest_common.sh@10 -- # set +x 00:09:42.640 13:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.640 13:38:45 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:42.640 13:38:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.640 13:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.640 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:09:42.897 13:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.897 13:38:45 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:42.897 13:38:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.897 13:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.897 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 13:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.154 13:38:45 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:43.154 13:38:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.154 13:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.154 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:09:43.411 13:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.411 13:38:46 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:43.411 13:38:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.411 13:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.411 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:09:43.668 13:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.668 13:38:46 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:43.668 13:38:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.668 13:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.668 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:09:44.232 13:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.232 13:38:46 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:44.232 13:38:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.232 13:38:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.232 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:09:44.500 13:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.500 13:38:47 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:44.500 13:38:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.500 13:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.500 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:09:44.791 13:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.791 13:38:47 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:44.791 13:38:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.791 13:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.791 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:09:45.047 13:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.047 13:38:47 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:45.047 13:38:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.047 13:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.047 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:09:45.304 13:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.304 13:38:48 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:45.304 13:38:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.304 13:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.304 13:38:48 -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 13:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.868 13:38:48 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:45.868 13:38:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.868 13:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.868 13:38:48 -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:46.125 13:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.125 13:38:48 -- target/connect_stress.sh@34 -- # kill -0 1075708 00:09:46.125 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1075708) - No such process 00:09:46.125 13:38:48 -- target/connect_stress.sh@38 -- # wait 1075708 00:09:46.125 13:38:48 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:46.125 13:38:48 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:46.125 13:38:48 -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:46.126 13:38:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:46.126 13:38:48 -- nvmf/common.sh@117 -- # sync 00:09:46.126 13:38:48 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:46.126 13:38:48 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:46.126 13:38:48 -- nvmf/common.sh@120 -- # set +e 00:09:46.126 13:38:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.126 13:38:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:46.126 rmmod nvme_rdma 00:09:46.126 rmmod nvme_fabrics 00:09:46.126 13:38:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.126 13:38:48 -- nvmf/common.sh@124 -- # set -e 00:09:46.126 13:38:48 -- nvmf/common.sh@125 -- # return 0 00:09:46.126 13:38:48 -- nvmf/common.sh@478 -- # '[' -n 1075675 ']' 00:09:46.126 13:38:48 -- nvmf/common.sh@479 -- # killprocess 1075675 00:09:46.126 13:38:48 -- common/autotest_common.sh@936 -- # '[' -z 1075675 ']' 
00:09:46.126 13:38:48 -- common/autotest_common.sh@940 -- # kill -0 1075675 00:09:46.126 13:38:48 -- common/autotest_common.sh@941 -- # uname 00:09:46.126 13:38:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:46.126 13:38:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1075675 00:09:46.126 13:38:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:46.126 13:38:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:46.126 13:38:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1075675' 00:09:46.126 killing process with pid 1075675 00:09:46.126 13:38:48 -- common/autotest_common.sh@955 -- # kill 1075675 00:09:46.126 13:38:48 -- common/autotest_common.sh@960 -- # wait 1075675 00:09:46.383 13:38:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:46.383 13:38:49 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:46.383 00:09:46.383 real 0m14.294s 00:09:46.383 user 0m39.499s 00:09:46.383 sys 0m4.207s 00:09:46.383 13:38:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:46.383 13:38:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.383 ************************************ 00:09:46.383 END TEST nvmf_connect_stress 00:09:46.383 ************************************ 00:09:46.641 13:38:49 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:46.641 13:38:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:46.641 13:38:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.641 13:38:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.641 ************************************ 00:09:46.641 START TEST nvmf_fused_ordering 00:09:46.641 ************************************ 00:09:46.642 13:38:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:46.642 * Looking for test storage... 
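The run of repeated connect_stress.sh@34 "kill -0 1075708" / @35 "rpc_cmd" traces above is the stress test's polling loop: as long as the background workload (pid 1075708 in this run) is alive, the script keeps issuing RPCs against the target; once kill -0 reports "No such process" it reaps the pid, deletes its rpc.txt scratch file and tears the target down, which is the tail end of the trace. A minimal sketch of that pattern, assuming the autotest rpc_cmd helper and nvmftestfini are in scope (the stand-in workload and RPC below are illustrative, not the verbatim script):

  sleep 10 & stress_pid=$!                    # stand-in for the real background stress workload
  while kill -0 "$stress_pid" 2>/dev/null; do
    rpc_cmd spdk_get_version > /dev/null      # stand-in; the real script drives its own RPC traffic here
  done
  wait "$stress_pid"                          # reap the workload once kill -0 says it is gone
  rm -f "$testdir/rpc.txt"                    # scratch file used by the loop
  trap - SIGINT SIGTERM EXIT
  nvmftestfini                                # stop nvmf_tgt and unload nvme-rdma / nvme-fabrics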
00:09:46.642 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:46.642 13:38:49 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.642 13:38:49 -- nvmf/common.sh@7 -- # uname -s 00:09:46.642 13:38:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.642 13:38:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.642 13:38:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.642 13:38:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.642 13:38:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.642 13:38:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.642 13:38:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.642 13:38:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.642 13:38:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.642 13:38:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.642 13:38:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:09:46.642 13:38:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:09:46.642 13:38:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.642 13:38:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.642 13:38:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.642 13:38:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.642 13:38:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:46.642 13:38:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.642 13:38:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.642 13:38:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.642 13:38:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.642 13:38:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.642 13:38:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.642 13:38:49 -- paths/export.sh@5 -- # export PATH 00:09:46.642 13:38:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.642 13:38:49 -- nvmf/common.sh@47 -- # : 0 00:09:46.642 13:38:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.642 13:38:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.642 13:38:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.642 13:38:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.642 13:38:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.642 13:38:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.642 13:38:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.642 13:38:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.642 13:38:49 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:46.642 13:38:49 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:46.642 13:38:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.642 13:38:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:46.642 13:38:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:46.642 13:38:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:46.642 13:38:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.642 13:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.642 13:38:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.642 13:38:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:46.642 13:38:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:46.642 13:38:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.642 13:38:49 -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 13:38:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:49.921 13:38:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.921 13:38:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.921 13:38:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.921 13:38:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.921 13:38:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.921 13:38:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.921 13:38:52 -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.921 13:38:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.921 13:38:52 -- nvmf/common.sh@296 -- # e810=() 00:09:49.921 13:38:52 -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.921 13:38:52 -- nvmf/common.sh@297 -- # x722=() 
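Part of what common.sh set up a few lines above is the host identity that later nvme connect calls present. The relationship between the two values visible in the trace (NVME_HOSTNQN and NVME_HOSTID) is roughly the following; the exact expansion common.sh uses may differ, this is only an illustration of how the traced values relate:

  NVME_HOSTNQN=$(nvme gen-hostnqn)         # nvme-cli prints nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}     # keep only the UUID, e.g. 6b85a288-a0c4-e211-af09-001e678e7911
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # these flags ride along on the 'nvme connect -i 15 ...' invocations later in the run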
00:09:49.921 13:38:52 -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.921 13:38:52 -- nvmf/common.sh@298 -- # mlx=() 00:09:49.921 13:38:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.921 13:38:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.921 13:38:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.921 13:38:52 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:49.921 13:38:52 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:49.921 13:38:52 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:49.921 13:38:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.921 13:38:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.921 13:38:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:09:49.921 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:09:49.921 13:38:52 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:49.921 13:38:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.921 13:38:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:09:49.921 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:09:49.921 13:38:52 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:49.921 13:38:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.921 13:38:52 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.921 13:38:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.921 13:38:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:49.921 13:38:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.921 13:38:52 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:09:49.921 Found net devices under 0000:81:00.0: mlx_0_0 00:09:49.921 13:38:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.921 13:38:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.921 13:38:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.921 13:38:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:49.921 13:38:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.921 13:38:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:09:49.921 Found net devices under 0000:81:00.1: mlx_0_1 00:09:49.921 13:38:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.921 13:38:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:49.921 13:38:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:49.921 13:38:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:49.921 13:38:52 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:49.921 13:38:52 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:49.921 13:38:52 -- nvmf/common.sh@58 -- # uname 00:09:49.921 13:38:52 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:49.921 13:38:52 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:49.921 13:38:52 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:49.921 13:38:52 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:49.921 13:38:52 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:49.921 13:38:52 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:49.921 13:38:52 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:49.921 13:38:52 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:49.922 13:38:52 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:49.922 13:38:52 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:49.922 13:38:52 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:49.922 13:38:52 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:49.922 13:38:52 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:49.922 13:38:52 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:49.922 13:38:52 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:49.922 13:38:52 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:49.922 13:38:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:49.922 13:38:52 -- nvmf/common.sh@105 -- # continue 2 00:09:49.922 13:38:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@105 -- # continue 2 00:09:49.922 13:38:52 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:49.922 13:38:52 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:49.922 13:38:52 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:49.922 13:38:52 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:49.922 13:38:52 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:49.922 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:49.922 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:09:49.922 altname enp129s0f0np0 00:09:49.922 inet 192.168.100.8/24 scope global mlx_0_0 00:09:49.922 valid_lft forever preferred_lft forever 00:09:49.922 13:38:52 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:49.922 13:38:52 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:49.922 13:38:52 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:49.922 13:38:52 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:49.922 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:49.922 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:09:49.922 altname enp129s0f1np1 00:09:49.922 inet 192.168.100.9/24 scope global mlx_0_1 00:09:49.922 valid_lft forever preferred_lft forever 00:09:49.922 13:38:52 -- nvmf/common.sh@411 -- # return 0 00:09:49.922 13:38:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:49.922 13:38:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:49.922 13:38:52 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:49.922 13:38:52 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:49.922 13:38:52 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:49.922 13:38:52 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:49.922 13:38:52 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:49.922 13:38:52 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:49.922 13:38:52 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:49.922 13:38:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:49.922 13:38:52 -- nvmf/common.sh@105 -- # continue 2 00:09:49.922 13:38:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.922 13:38:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:49.922 13:38:52 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@105 -- # continue 2 00:09:49.922 13:38:52 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:49.922 13:38:52 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:49.922 13:38:52 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:49.922 13:38:52 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:49.922 13:38:52 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.922 13:38:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:49.922 13:38:52 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:49.922 192.168.100.9' 00:09:49.922 13:38:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:49.922 192.168.100.9' 00:09:49.922 13:38:52 -- nvmf/common.sh@446 -- # head -n 1 00:09:49.922 13:38:52 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:49.922 13:38:52 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:49.922 192.168.100.9' 00:09:49.922 13:38:52 -- nvmf/common.sh@447 -- # tail -n +2 00:09:49.922 13:38:52 -- nvmf/common.sh@447 -- # head -n 1 00:09:49.922 13:38:52 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:49.922 13:38:52 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:49.922 13:38:52 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:49.922 13:38:52 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:49.922 13:38:52 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:49.922 13:38:52 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:49.922 13:38:52 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:49.922 13:38:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:49.922 13:38:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:49.922 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:49.922 13:38:52 -- nvmf/common.sh@470 -- # nvmfpid=1079111 00:09:49.922 13:38:52 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:49.922 13:38:52 -- nvmf/common.sh@471 -- # waitforlisten 1079111 00:09:49.922 13:38:52 -- common/autotest_common.sh@817 -- # '[' -z 1079111 ']' 00:09:49.922 13:38:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.922 13:38:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:49.922 13:38:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.922 13:38:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:49.922 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:49.922 [2024-04-18 13:38:52.366196] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
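The interface walk traced above boils down to one small pipeline per RDMA netdev; taken directly from the commands in this trace (the interface names are the ones from this run):

  for interface in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  done
  # -> 192.168.100.8 and 192.168.100.9; the first becomes NVMF_FIRST_TARGET_IP,
  #    the second NVMF_SECOND_TARGET_IP, and both end up in RDMA_IP_LIST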
00:09:49.922 [2024-04-18 13:38:52.366298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.922 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.922 [2024-04-18 13:38:52.451455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.922 [2024-04-18 13:38:52.572509] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.922 [2024-04-18 13:38:52.572573] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.922 [2024-04-18 13:38:52.572590] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.922 [2024-04-18 13:38:52.572604] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.922 [2024-04-18 13:38:52.572616] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.922 [2024-04-18 13:38:52.572650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.922 13:38:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:49.922 13:38:52 -- common/autotest_common.sh@850 -- # return 0 00:09:49.922 13:38:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:49.922 13:38:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:49.922 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.180 13:38:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.180 13:38:52 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:50.180 13:38:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.180 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.180 [2024-04-18 13:38:52.763598] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf64220/0xf68710) succeed. 00:09:50.180 [2024-04-18 13:38:52.778261] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf65720/0xfa9da0) succeed. 
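With the RDMA transport created (the two create_ib_device notices above), the rest of the setup follows in the trace below. Condensed into one block, the sequence of commands from this run is roughly the following, with rpc_cmd being the test suite's RPC helper and the binary path given relative to the spdk tree:

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512-byte blocks (the "1GB" namespace below)
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # and then the workload itself:
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'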
00:09:50.180 13:38:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.180 13:38:52 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.180 13:38:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.180 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.180 13:38:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.180 13:38:52 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:50.180 13:38:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.180 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.180 [2024-04-18 13:38:52.851731] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:50.180 13:38:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.180 13:38:52 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:50.180 13:38:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.180 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.180 NULL1 00:09:50.180 13:38:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.180 13:38:52 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:50.180 13:38:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.180 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.180 13:38:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.180 13:38:52 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:50.180 13:38:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.180 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:09:50.180 13:38:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.180 13:38:52 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:50.180 [2024-04-18 13:38:52.899317] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:09:50.180 [2024-04-18 13:38:52.899369] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079139 ] 00:09:50.180 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.438 Attached to nqn.2016-06.io.spdk:cnode1 00:09:50.438 Namespace ID: 1 size: 1GB 00:09:50.438 fused_ordering(0) 00:09:50.438 fused_ordering(1) 00:09:50.438 fused_ordering(2) 00:09:50.438 fused_ordering(3) 00:09:50.438 fused_ordering(4) 00:09:50.438 fused_ordering(5) 00:09:50.438 fused_ordering(6) 00:09:50.438 fused_ordering(7) 00:09:50.438 fused_ordering(8) 00:09:50.438 fused_ordering(9) 00:09:50.438 fused_ordering(10) 00:09:50.438 fused_ordering(11) 00:09:50.438 fused_ordering(12) 00:09:50.438 fused_ordering(13) 00:09:50.438 fused_ordering(14) 00:09:50.438 fused_ordering(15) 00:09:50.438 fused_ordering(16) 00:09:50.438 fused_ordering(17) 00:09:50.438 fused_ordering(18) 00:09:50.438 fused_ordering(19) 00:09:50.438 fused_ordering(20) 00:09:50.438 fused_ordering(21) 00:09:50.438 fused_ordering(22) 00:09:50.438 fused_ordering(23) 00:09:50.438 fused_ordering(24) 00:09:50.438 fused_ordering(25) 00:09:50.438 fused_ordering(26) 00:09:50.438 fused_ordering(27) 00:09:50.438 fused_ordering(28) 00:09:50.438 fused_ordering(29) 00:09:50.438 fused_ordering(30) 00:09:50.438 fused_ordering(31) 00:09:50.438 fused_ordering(32) 00:09:50.438 fused_ordering(33) 00:09:50.438 fused_ordering(34) 00:09:50.438 fused_ordering(35) 00:09:50.438 fused_ordering(36) 00:09:50.438 fused_ordering(37) 00:09:50.438 fused_ordering(38) 00:09:50.438 fused_ordering(39) 00:09:50.438 fused_ordering(40) 00:09:50.438 fused_ordering(41) 00:09:50.438 fused_ordering(42) 00:09:50.438 fused_ordering(43) 00:09:50.438 fused_ordering(44) 00:09:50.438 fused_ordering(45) 00:09:50.438 fused_ordering(46) 00:09:50.438 fused_ordering(47) 00:09:50.438 fused_ordering(48) 00:09:50.438 fused_ordering(49) 00:09:50.438 fused_ordering(50) 00:09:50.438 fused_ordering(51) 00:09:50.438 fused_ordering(52) 00:09:50.438 fused_ordering(53) 00:09:50.438 fused_ordering(54) 00:09:50.438 fused_ordering(55) 00:09:50.438 fused_ordering(56) 00:09:50.438 fused_ordering(57) 00:09:50.438 fused_ordering(58) 00:09:50.438 fused_ordering(59) 00:09:50.438 fused_ordering(60) 00:09:50.438 fused_ordering(61) 00:09:50.438 fused_ordering(62) 00:09:50.438 fused_ordering(63) 00:09:50.438 fused_ordering(64) 00:09:50.438 fused_ordering(65) 00:09:50.438 fused_ordering(66) 00:09:50.438 fused_ordering(67) 00:09:50.438 fused_ordering(68) 00:09:50.438 fused_ordering(69) 00:09:50.438 fused_ordering(70) 00:09:50.438 fused_ordering(71) 00:09:50.438 fused_ordering(72) 00:09:50.438 fused_ordering(73) 00:09:50.438 fused_ordering(74) 00:09:50.438 fused_ordering(75) 00:09:50.438 fused_ordering(76) 00:09:50.438 fused_ordering(77) 00:09:50.438 fused_ordering(78) 00:09:50.438 fused_ordering(79) 00:09:50.438 fused_ordering(80) 00:09:50.438 fused_ordering(81) 00:09:50.438 fused_ordering(82) 00:09:50.438 fused_ordering(83) 00:09:50.438 fused_ordering(84) 00:09:50.438 fused_ordering(85) 00:09:50.438 fused_ordering(86) 00:09:50.438 fused_ordering(87) 00:09:50.438 fused_ordering(88) 00:09:50.438 fused_ordering(89) 00:09:50.438 fused_ordering(90) 00:09:50.438 fused_ordering(91) 00:09:50.438 fused_ordering(92) 00:09:50.438 fused_ordering(93) 00:09:50.438 fused_ordering(94) 00:09:50.438 fused_ordering(95) 00:09:50.438 fused_ordering(96) 00:09:50.438 
fused_ordering(97) ... fused_ordering(956) [repetitive per-iteration counter output condensed: the fused_ordering tool printed one fused_ordering(N) line for every index from 97 through 956, timestamped between 00:09:50.438 and 00:09:51.213, with no errors interleaved] 
fused_ordering(957) 00:09:51.213 fused_ordering(958) 00:09:51.214 fused_ordering(959) 00:09:51.214 fused_ordering(960) 00:09:51.214 fused_ordering(961) 00:09:51.214 fused_ordering(962) 00:09:51.214 fused_ordering(963) 00:09:51.214 fused_ordering(964) 00:09:51.214 fused_ordering(965) 00:09:51.214 fused_ordering(966) 00:09:51.214 fused_ordering(967) 00:09:51.214 fused_ordering(968) 00:09:51.214 fused_ordering(969) 00:09:51.214 fused_ordering(970) 00:09:51.214 fused_ordering(971) 00:09:51.214 fused_ordering(972) 00:09:51.214 fused_ordering(973) 00:09:51.214 fused_ordering(974) 00:09:51.214 fused_ordering(975) 00:09:51.214 fused_ordering(976) 00:09:51.214 fused_ordering(977) 00:09:51.214 fused_ordering(978) 00:09:51.214 fused_ordering(979) 00:09:51.214 fused_ordering(980) 00:09:51.214 fused_ordering(981) 00:09:51.214 fused_ordering(982) 00:09:51.214 fused_ordering(983) 00:09:51.214 fused_ordering(984) 00:09:51.214 fused_ordering(985) 00:09:51.214 fused_ordering(986) 00:09:51.214 fused_ordering(987) 00:09:51.214 fused_ordering(988) 00:09:51.214 fused_ordering(989) 00:09:51.214 fused_ordering(990) 00:09:51.214 fused_ordering(991) 00:09:51.214 fused_ordering(992) 00:09:51.214 fused_ordering(993) 00:09:51.214 fused_ordering(994) 00:09:51.214 fused_ordering(995) 00:09:51.214 fused_ordering(996) 00:09:51.214 fused_ordering(997) 00:09:51.214 fused_ordering(998) 00:09:51.214 fused_ordering(999) 00:09:51.214 fused_ordering(1000) 00:09:51.214 fused_ordering(1001) 00:09:51.214 fused_ordering(1002) 00:09:51.214 fused_ordering(1003) 00:09:51.214 fused_ordering(1004) 00:09:51.214 fused_ordering(1005) 00:09:51.214 fused_ordering(1006) 00:09:51.214 fused_ordering(1007) 00:09:51.214 fused_ordering(1008) 00:09:51.214 fused_ordering(1009) 00:09:51.214 fused_ordering(1010) 00:09:51.214 fused_ordering(1011) 00:09:51.214 fused_ordering(1012) 00:09:51.214 fused_ordering(1013) 00:09:51.214 fused_ordering(1014) 00:09:51.214 fused_ordering(1015) 00:09:51.214 fused_ordering(1016) 00:09:51.214 fused_ordering(1017) 00:09:51.214 fused_ordering(1018) 00:09:51.214 fused_ordering(1019) 00:09:51.214 fused_ordering(1020) 00:09:51.214 fused_ordering(1021) 00:09:51.214 fused_ordering(1022) 00:09:51.214 fused_ordering(1023) 00:09:51.214 13:38:53 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:51.214 13:38:53 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:51.214 13:38:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:51.214 13:38:53 -- nvmf/common.sh@117 -- # sync 00:09:51.214 13:38:53 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:51.214 13:38:53 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:51.214 13:38:53 -- nvmf/common.sh@120 -- # set +e 00:09:51.214 13:38:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.214 13:38:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:51.214 rmmod nvme_rdma 00:09:51.214 rmmod nvme_fabrics 00:09:51.214 13:38:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.214 13:38:53 -- nvmf/common.sh@124 -- # set -e 00:09:51.214 13:38:53 -- nvmf/common.sh@125 -- # return 0 00:09:51.214 13:38:53 -- nvmf/common.sh@478 -- # '[' -n 1079111 ']' 00:09:51.214 13:38:53 -- nvmf/common.sh@479 -- # killprocess 1079111 00:09:51.214 13:38:53 -- common/autotest_common.sh@936 -- # '[' -z 1079111 ']' 00:09:51.214 13:38:53 -- common/autotest_common.sh@940 -- # kill -0 1079111 00:09:51.214 13:38:53 -- common/autotest_common.sh@941 -- # uname 00:09:51.214 13:38:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:51.214 13:38:53 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1079111 00:09:51.214 13:38:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:51.214 13:38:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:51.214 13:38:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1079111' 00:09:51.214 killing process with pid 1079111 00:09:51.214 13:38:53 -- common/autotest_common.sh@955 -- # kill 1079111 00:09:51.214 13:38:53 -- common/autotest_common.sh@960 -- # wait 1079111 00:09:51.472 13:38:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:51.472 13:38:54 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:51.472 00:09:51.472 real 0m4.892s 00:09:51.472 user 0m3.709s 00:09:51.472 sys 0m2.574s 00:09:51.472 13:38:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.472 13:38:54 -- common/autotest_common.sh@10 -- # set +x 00:09:51.472 ************************************ 00:09:51.472 END TEST nvmf_fused_ordering 00:09:51.472 ************************************ 00:09:51.472 13:38:54 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:51.472 13:38:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:51.472 13:38:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.472 13:38:54 -- common/autotest_common.sh@10 -- # set +x 00:09:51.729 ************************************ 00:09:51.729 START TEST nvmf_delete_subsystem 00:09:51.729 ************************************ 00:09:51.729 13:38:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:51.729 * Looking for test storage... 
00:09:51.729 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:51.729 13:38:54 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.729 13:38:54 -- nvmf/common.sh@7 -- # uname -s 00:09:51.729 13:38:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.729 13:38:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.729 13:38:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.729 13:38:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.730 13:38:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.730 13:38:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.730 13:38:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.730 13:38:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.730 13:38:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.730 13:38:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.730 13:38:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:09:51.730 13:38:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:09:51.730 13:38:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.730 13:38:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.730 13:38:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.730 13:38:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.730 13:38:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:51.730 13:38:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.730 13:38:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.730 13:38:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.730 13:38:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.730 13:38:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.730 13:38:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.730 13:38:54 -- paths/export.sh@5 -- # export PATH 00:09:51.730 13:38:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.730 13:38:54 -- nvmf/common.sh@47 -- # : 0 00:09:51.730 13:38:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.730 13:38:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.730 13:38:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.730 13:38:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.730 13:38:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.730 13:38:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.730 13:38:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.730 13:38:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.730 13:38:54 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:51.730 13:38:54 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:51.730 13:38:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.730 13:38:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:51.730 13:38:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:51.730 13:38:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:51.730 13:38:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.730 13:38:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.730 13:38:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.730 13:38:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:51.730 13:38:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:51.730 13:38:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.730 13:38:54 -- common/autotest_common.sh@10 -- # set +x 00:09:55.020 13:38:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:55.020 13:38:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.020 13:38:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.020 13:38:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.020 13:38:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.020 13:38:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.020 13:38:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.020 13:38:57 -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.020 13:38:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.020 13:38:57 -- nvmf/common.sh@296 -- # e810=() 00:09:55.020 13:38:57 -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.020 13:38:57 -- nvmf/common.sh@297 -- # 
x722=() 00:09:55.020 13:38:57 -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.020 13:38:57 -- nvmf/common.sh@298 -- # mlx=() 00:09:55.020 13:38:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.020 13:38:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.020 13:38:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.020 13:38:57 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:55.020 13:38:57 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:55.020 13:38:57 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:55.020 13:38:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.020 13:38:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.020 13:38:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:09:55.020 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:09:55.020 13:38:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:55.020 13:38:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.020 13:38:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:09:55.020 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:09:55.020 13:38:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:55.020 13:38:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.020 13:38:57 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:55.020 13:38:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.020 13:38:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.020 13:38:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.020 13:38:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.020 13:38:57 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:09:55.020 Found net devices under 0000:81:00.0: mlx_0_0 00:09:55.020 13:38:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.020 13:38:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.020 13:38:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.020 13:38:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.020 13:38:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.020 13:38:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:09:55.021 Found net devices under 0000:81:00.1: mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.021 13:38:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:55.021 13:38:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:55.021 13:38:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:55.021 13:38:57 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:55.021 13:38:57 -- nvmf/common.sh@58 -- # uname 00:09:55.021 13:38:57 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:55.021 13:38:57 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:55.021 13:38:57 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:55.021 13:38:57 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:55.021 13:38:57 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:55.021 13:38:57 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:55.021 13:38:57 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:55.021 13:38:57 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:55.021 13:38:57 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:55.021 13:38:57 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:55.021 13:38:57 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:55.021 13:38:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:55.021 13:38:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:55.021 13:38:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:55.021 13:38:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:55.021 13:38:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:55.021 13:38:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:55.021 13:38:57 -- nvmf/common.sh@105 -- # continue 2 00:09:55.021 13:38:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@105 -- # continue 2 00:09:55.021 13:38:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:55.021 13:38:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:55.021 13:38:57 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.021 13:38:57 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:55.021 13:38:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:55.021 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:55.021 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:09:55.021 altname enp129s0f0np0 00:09:55.021 inet 192.168.100.8/24 scope global mlx_0_0 00:09:55.021 valid_lft forever preferred_lft forever 00:09:55.021 13:38:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:55.021 13:38:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.021 13:38:57 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:55.021 13:38:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:55.021 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:55.021 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:09:55.021 altname enp129s0f1np1 00:09:55.021 inet 192.168.100.9/24 scope global mlx_0_1 00:09:55.021 valid_lft forever preferred_lft forever 00:09:55.021 13:38:57 -- nvmf/common.sh@411 -- # return 0 00:09:55.021 13:38:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:55.021 13:38:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:55.021 13:38:57 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:55.021 13:38:57 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:55.021 13:38:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:55.021 13:38:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:55.021 13:38:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:55.021 13:38:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:55.021 13:38:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:55.021 13:38:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:55.021 13:38:57 -- nvmf/common.sh@105 -- # continue 2 00:09:55.021 13:38:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:55.021 13:38:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:55.021 13:38:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@105 -- # continue 2 00:09:55.021 13:38:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:55.021 13:38:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 
00:09:55.021 13:38:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.021 13:38:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:55.021 13:38:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:55.021 13:38:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:55.021 13:38:57 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:55.021 192.168.100.9' 00:09:55.021 13:38:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:55.021 192.168.100.9' 00:09:55.021 13:38:57 -- nvmf/common.sh@446 -- # head -n 1 00:09:55.021 13:38:57 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:55.021 13:38:57 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:55.021 192.168.100.9' 00:09:55.021 13:38:57 -- nvmf/common.sh@447 -- # tail -n +2 00:09:55.021 13:38:57 -- nvmf/common.sh@447 -- # head -n 1 00:09:55.021 13:38:57 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:55.021 13:38:57 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:55.021 13:38:57 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:55.021 13:38:57 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:55.021 13:38:57 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:55.021 13:38:57 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:55.021 13:38:57 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:55.021 13:38:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:55.021 13:38:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:55.021 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.021 13:38:57 -- nvmf/common.sh@470 -- # nvmfpid=1081342 00:09:55.021 13:38:57 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:55.021 13:38:57 -- nvmf/common.sh@471 -- # waitforlisten 1081342 00:09:55.021 13:38:57 -- common/autotest_common.sh@817 -- # '[' -z 1081342 ']' 00:09:55.021 13:38:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.021 13:38:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:55.021 13:38:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.021 13:38:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:55.021 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.021 [2024-04-18 13:38:57.379336] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:09:55.021 [2024-04-18 13:38:57.379444] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.021 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.021 [2024-04-18 13:38:57.466618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.021 [2024-04-18 13:38:57.587857] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.021 [2024-04-18 13:38:57.587927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.021 [2024-04-18 13:38:57.587952] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.021 [2024-04-18 13:38:57.587967] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.021 [2024-04-18 13:38:57.587980] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.021 [2024-04-18 13:38:57.588053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.021 [2024-04-18 13:38:57.588060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.021 13:38:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:55.021 13:38:57 -- common/autotest_common.sh@850 -- # return 0 00:09:55.021 13:38:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:55.021 13:38:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:55.021 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.021 13:38:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.021 13:38:57 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:55.021 13:38:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.021 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.021 [2024-04-18 13:38:57.771315] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x155aa30/0x155ef20) succeed. 00:09:55.021 [2024-04-18 13:38:57.783451] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x155bf30/0x15a05b0) succeed. 
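For readers reproducing this outside the harness: the trace above shows nvmf/common.sh discovering the two mlx5 ports (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1), nvmfappstart launching nvmf_tgt on cores 0-1, and the first RPC creating the RDMA transport. A minimal sketch of the same bring-up, assuming an SPDK tree in $SPDK_DIR and the default /var/tmp/spdk.sock RPC socket (rpc_cmd in the trace is a thin wrapper over scripts/rpc.py):

    # Derive the target IP the same way the trace does (ip/awk/cut on the RoCE netdev)
    NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)

    # Start the target with the flags seen above, then wait for its RPC socket
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Same transport options as the rpc_cmd call in the log
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192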
00:09:55.280 13:38:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:55.280 13:38:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.280 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 13:38:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:55.280 13:38:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.280 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 [2024-04-18 13:38:57.895129] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:55.280 13:38:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:55.280 13:38:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.280 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 NULL1 00:09:55.280 13:38:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:55.280 13:38:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.280 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 Delay0 00:09:55.280 13:38:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.280 13:38:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.280 13:38:57 -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 13:38:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@28 -- # perf_pid=1081383 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:55.280 13:38:57 -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:55.280 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.280 [2024-04-18 13:38:58.006603] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
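The subsystem that is about to be deleted is assembled entirely over RPC: a 1000 MiB null bdev is wrapped in a delay bdev that injects roughly 1,000,000 us of latency per I/O, which guarantees that spdk_nvme_perf still has a full queue outstanding when the delete lands. A sketch of the same sequence (rpc.py path assumed; every argument below is the one visible in the trace):

    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512                  # 1000 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1,000,000 us added to every read/write
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Load generator used above: qd 128, 512 B random mixed I/O for 5 s on cores 2-3
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!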
00:09:57.178 13:38:59 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.178 13:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.178 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:09:58.550 NVMe io qpair process completion error 00:09:58.550 NVMe io qpair process completion error 00:09:58.550 NVMe io qpair process completion error 00:09:58.550 NVMe io qpair process completion error 00:09:58.550 NVMe io qpair process completion error 00:09:58.550 NVMe io qpair process completion error 00:09:58.550 13:39:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.550 13:39:01 -- target/delete_subsystem.sh@34 -- # delay=0 00:09:58.550 13:39:01 -- target/delete_subsystem.sh@35 -- # kill -0 1081383 00:09:58.550 13:39:01 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:58.808 13:39:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:58.808 13:39:01 -- target/delete_subsystem.sh@35 -- # kill -0 1081383 00:09:58.808 13:39:01 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Read completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 00:09:59.372 Write completed with 
error (sct=0, sc=8) 00:09:59.372 starting I/O failed: -6 [... a few hundred further "Read/Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6" entries, stamped between 00:09:59.372 and 00:09:59.374, elided; the tail of the burst continues below ...] 00:09:59.374
Read completed with error (sct=0, sc=8) 00:09:59.374 Write completed with error (sct=0, sc=8) 00:09:59.374 Write completed with error (sct=0, sc=8) 00:09:59.374 Read completed with error (sct=0, sc=8) 00:09:59.374 Write completed with error (sct=0, sc=8) 00:09:59.374 Write completed with error (sct=0, sc=8) 00:09:59.374 Read completed with error (sct=0, sc=8) 00:09:59.374 Read completed with error (sct=0, sc=8) 00:09:59.374 Read completed with error (sct=0, sc=8) 00:09:59.374 Read completed with error (sct=0, sc=8) 00:09:59.374 13:39:02 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:59.374 13:39:02 -- target/delete_subsystem.sh@35 -- # kill -0 1081383 00:09:59.374 13:39:02 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:59.374 [2024-04-18 13:39:02.126426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:59.374 [2024-04-18 13:39:02.126499] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:09:59.374 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:59.374 Initializing NVMe Controllers 00:09:59.374 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.374 Controller IO queue size 128, less than required. 00:09:59.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:59.374 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:59.374 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:59.374 Initialization complete. Launching workers. 00:09:59.374 ======================================================== 00:09:59.374 Latency(us) 00:09:59.374 Device Information : IOPS MiB/s Average min max 00:09:59.374 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.54 0.04 1595041.67 1000206.39 2975304.58 00:09:59.374 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.54 0.04 1593270.18 1000165.09 2973990.44 00:09:59.374 ======================================================== 00:09:59.374 Total : 161.07 0.08 1594155.93 1000165.09 2975304.58 00:09:59.374 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@35 -- # kill -0 1081383 00:09:59.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1081383) - No such process 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@45 -- # NOT wait 1081383 00:09:59.940 13:39:02 -- common/autotest_common.sh@638 -- # local es=0 00:09:59.940 13:39:02 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1081383 00:09:59.940 13:39:02 -- common/autotest_common.sh@626 -- # local arg=wait 00:09:59.940 13:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:59.940 13:39:02 -- common/autotest_common.sh@630 -- # type -t wait 00:09:59.940 13:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:59.940 13:39:02 -- common/autotest_common.sh@641 -- # wait 1081383 00:09:59.940 13:39:02 -- common/autotest_common.sh@641 -- # es=1 00:09:59.940 13:39:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:59.940 13:39:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:59.940 13:39:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:59.940 
13:39:02 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:59.940 13:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.940 13:39:02 -- common/autotest_common.sh@10 -- # set +x 00:09:59.940 13:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:59.940 13:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.940 13:39:02 -- common/autotest_common.sh@10 -- # set +x 00:09:59.940 [2024-04-18 13:39:02.628759] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:59.940 13:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.940 13:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.940 13:39:02 -- common/autotest_common.sh@10 -- # set +x 00:09:59.940 13:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@54 -- # perf_pid=1082130 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@56 -- # delay=0 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:59.940 13:39:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:59.940 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.940 [2024-04-18 13:39:02.741882] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
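The block of timestamps that follows is the harness waiting for the second perf run (pid 1082130, -t 3 this time) to finish against the freshly re-created subsystem; a similar bounded poll was used above for pid 1081383 after the subsystem was deleted underneath it. Roughly, with $perf_pid standing in for the literal pid:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do             # perf still running?
        (( delay++ > 20 )) && { echo "timed out waiting for spdk_nvme_perf"; exit 1; }
        sleep 0.5
    done
    wait "$perf_pid" || true    # reap it; the later 'No such process' from kill is expected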
00:10:00.514 13:39:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:00.515 13:39:03 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:00.515 13:39:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:01.110 13:39:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:01.110 13:39:03 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:01.110 13:39:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:01.384 13:39:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:01.384 13:39:04 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:01.384 13:39:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:01.950 13:39:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:01.950 13:39:04 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:01.950 13:39:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:02.514 13:39:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:02.514 13:39:05 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:02.514 13:39:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.079 13:39:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.079 13:39:05 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:03.079 13:39:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.644 13:39:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.644 13:39:06 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:03.644 13:39:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.901 13:39:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.901 13:39:06 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:03.901 13:39:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.469 13:39:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:04.469 13:39:07 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:04.469 13:39:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.034 13:39:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.034 13:39:07 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:05.034 13:39:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.599 13:39:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.599 13:39:08 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:05.599 13:39:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.164 13:39:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.164 13:39:08 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:06.164 13:39:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.422 13:39:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.423 13:39:09 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:06.423 13:39:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.988 13:39:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.988 13:39:09 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:06.988 13:39:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.246 Initializing NVMe Controllers 00:10:07.246 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:07.246 Controller IO queue size 128, less than required. 00:10:07.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:07.246 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:07.246 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:07.246 Initialization complete. Launching workers. 00:10:07.246 ======================================================== 00:10:07.246 Latency(us) 00:10:07.246 Device Information : IOPS MiB/s Average min max 00:10:07.246 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001684.76 1000081.05 1005207.03 00:10:07.247 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003111.55 1000261.03 1007696.26 00:10:07.247 ======================================================== 00:10:07.247 Total : 256.00 0.12 1002398.15 1000081.05 1007696.26 00:10:07.247 00:10:07.504 13:39:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:07.504 13:39:10 -- target/delete_subsystem.sh@57 -- # kill -0 1082130 00:10:07.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1082130) - No such process 00:10:07.504 13:39:10 -- target/delete_subsystem.sh@67 -- # wait 1082130 00:10:07.504 13:39:10 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:07.504 13:39:10 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:07.504 13:39:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:07.504 13:39:10 -- nvmf/common.sh@117 -- # sync 00:10:07.504 13:39:10 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:07.504 13:39:10 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:07.504 13:39:10 -- nvmf/common.sh@120 -- # set +e 00:10:07.504 13:39:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.504 13:39:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:07.504 rmmod nvme_rdma 00:10:07.504 rmmod nvme_fabrics 00:10:07.504 13:39:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.504 13:39:10 -- nvmf/common.sh@124 -- # set -e 00:10:07.504 13:39:10 -- nvmf/common.sh@125 -- # return 0 00:10:07.504 13:39:10 -- nvmf/common.sh@478 -- # '[' -n 1081342 ']' 00:10:07.505 13:39:10 -- nvmf/common.sh@479 -- # killprocess 1081342 00:10:07.505 13:39:10 -- common/autotest_common.sh@936 -- # '[' -z 1081342 ']' 00:10:07.505 13:39:10 -- common/autotest_common.sh@940 -- # kill -0 1081342 00:10:07.505 13:39:10 -- common/autotest_common.sh@941 -- # uname 00:10:07.505 13:39:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.505 13:39:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1081342 00:10:07.505 13:39:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.505 13:39:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.505 13:39:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1081342' 00:10:07.505 killing process with pid 1081342 00:10:07.505 13:39:10 -- common/autotest_common.sh@955 -- # kill 1081342 00:10:07.505 13:39:10 -- common/autotest_common.sh@960 -- # wait 1081342 00:10:08.070 13:39:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:08.070 13:39:10 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:08.070 00:10:08.070 real 0m16.302s 00:10:08.070 user 0m48.294s 00:10:08.070 sys 0m3.228s 00:10:08.070 13:39:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:08.070 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 ************************************ 00:10:08.070 END TEST nvmf_delete_subsystem 00:10:08.070 
************************************ 00:10:08.070 13:39:10 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:10:08.070 13:39:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:08.070 13:39:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.070 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 ************************************ 00:10:08.070 START TEST nvmf_ns_masking 00:10:08.070 ************************************ 00:10:08.070 13:39:10 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:10:08.070 * Looking for test storage... 00:10:08.070 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:08.070 13:39:10 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.327 13:39:10 -- nvmf/common.sh@7 -- # uname -s 00:10:08.327 13:39:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.327 13:39:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.327 13:39:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.327 13:39:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.327 13:39:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.327 13:39:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.327 13:39:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.327 13:39:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.327 13:39:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.327 13:39:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.327 13:39:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:10:08.327 13:39:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:10:08.328 13:39:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.328 13:39:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.328 13:39:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.328 13:39:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.328 13:39:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:08.328 13:39:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.328 13:39:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.328 13:39:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.328 13:39:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.328 13:39:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.328 13:39:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.328 13:39:10 -- paths/export.sh@5 -- # export PATH 00:10:08.328 13:39:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.328 13:39:10 -- nvmf/common.sh@47 -- # : 0 00:10:08.328 13:39:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.328 13:39:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.328 13:39:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.328 13:39:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.328 13:39:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.328 13:39:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.328 13:39:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.328 13:39:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.328 13:39:10 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:08.328 13:39:10 -- target/ns_masking.sh@11 -- # loops=5 00:10:08.328 13:39:10 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:08.328 13:39:10 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:08.328 13:39:10 -- target/ns_masking.sh@15 -- # uuidgen 00:10:08.328 13:39:10 -- target/ns_masking.sh@15 -- # HOSTID=b58fce28-8576-458f-9c3f-9cbd0a87ca0e 00:10:08.328 13:39:10 -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:08.328 13:39:10 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:08.328 13:39:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.328 13:39:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:08.328 13:39:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:08.328 13:39:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:08.328 13:39:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.328 13:39:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.328 13:39:10 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:10:08.328 13:39:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:08.328 13:39:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:08.328 13:39:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.328 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:10:10.855 13:39:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:10.855 13:39:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.855 13:39:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:10.855 13:39:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.855 13:39:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.855 13:39:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.855 13:39:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.855 13:39:13 -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.855 13:39:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.855 13:39:13 -- nvmf/common.sh@296 -- # e810=() 00:10:10.855 13:39:13 -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.855 13:39:13 -- nvmf/common.sh@297 -- # x722=() 00:10:10.855 13:39:13 -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.855 13:39:13 -- nvmf/common.sh@298 -- # mlx=() 00:10:10.855 13:39:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.855 13:39:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.855 13:39:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.855 13:39:13 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:10.855 13:39:13 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:10.855 13:39:13 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:10.855 13:39:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.855 13:39:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.855 13:39:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:10:10.855 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:10:10.855 13:39:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.855 13:39:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.855 
13:39:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:10:10.855 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:10:10.855 13:39:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:10.855 13:39:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:10.856 13:39:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.856 13:39:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:10.856 13:39:13 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:10.856 13:39:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.856 13:39:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.856 13:39:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:10.856 13:39:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.856 13:39:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:10:10.856 Found net devices under 0000:81:00.0: mlx_0_0 00:10:10.856 13:39:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.856 13:39:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.856 13:39:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.856 13:39:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:10.856 13:39:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.856 13:39:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:10:10.856 Found net devices under 0000:81:00.1: mlx_0_1 00:10:10.856 13:39:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.856 13:39:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:10.856 13:39:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:10.856 13:39:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:10.856 13:39:13 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:10.856 13:39:13 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:10.856 13:39:13 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:10.856 13:39:13 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:10.856 13:39:13 -- nvmf/common.sh@58 -- # uname 00:10:10.856 13:39:13 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:10.856 13:39:13 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:10.856 13:39:13 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:10.856 13:39:13 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:10.856 13:39:13 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:10.856 13:39:13 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:10.856 13:39:13 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:10.856 13:39:13 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:10.856 13:39:13 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:10.856 13:39:13 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:10.856 13:39:13 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:10.856 13:39:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:10.856 13:39:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:10.856 13:39:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:10.856 13:39:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.114 13:39:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:11.114 13:39:13 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@105 -- # continue 2 00:10:11.114 13:39:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@105 -- # continue 2 00:10:11.114 13:39:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:11.114 13:39:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.114 13:39:13 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:11.114 13:39:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:11.114 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.114 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:10:11.114 altname enp129s0f0np0 00:10:11.114 inet 192.168.100.8/24 scope global mlx_0_0 00:10:11.114 valid_lft forever preferred_lft forever 00:10:11.114 13:39:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:11.114 13:39:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.114 13:39:13 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:11.114 13:39:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:11.114 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.114 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:10:11.114 altname enp129s0f1np1 00:10:11.114 inet 192.168.100.9/24 scope global mlx_0_1 00:10:11.114 valid_lft forever preferred_lft forever 00:10:11.114 13:39:13 -- nvmf/common.sh@411 -- # return 0 00:10:11.114 13:39:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:11.114 13:39:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:11.114 13:39:13 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:11.114 13:39:13 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:11.114 13:39:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.114 13:39:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:11.114 13:39:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:11.114 13:39:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.114 13:39:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:10:11.114 13:39:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@105 -- # continue 2 00:10:11.114 13:39:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.114 13:39:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.114 13:39:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@105 -- # continue 2 00:10:11.114 13:39:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:11.114 13:39:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.114 13:39:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:11.114 13:39:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.114 13:39:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.114 13:39:13 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:11.114 192.168.100.9' 00:10:11.114 13:39:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:11.114 192.168.100.9' 00:10:11.114 13:39:13 -- nvmf/common.sh@446 -- # head -n 1 00:10:11.114 13:39:13 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:11.114 13:39:13 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:11.114 192.168.100.9' 00:10:11.114 13:39:13 -- nvmf/common.sh@447 -- # tail -n +2 00:10:11.114 13:39:13 -- nvmf/common.sh@447 -- # head -n 1 00:10:11.114 13:39:13 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:11.114 13:39:13 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:11.114 13:39:13 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:11.114 13:39:13 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:11.114 13:39:13 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:11.114 13:39:13 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:11.114 13:39:13 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:11.114 13:39:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:11.114 13:39:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:11.114 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:10:11.114 13:39:13 -- nvmf/common.sh@470 -- # nvmfpid=1085539 00:10:11.114 13:39:13 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.114 13:39:13 -- nvmf/common.sh@471 -- # waitforlisten 1085539 00:10:11.114 13:39:13 -- common/autotest_common.sh@817 -- # '[' -z 1085539 ']' 00:10:11.114 13:39:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.114 
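The common.sh trace above walks the mlx_0_* netdevs and derives the target addresses from their IPv4 configuration. The per-interface extraction reduces to the pipeline below, a sketch of the traced commands with the interface name hard-coded for illustration:

    # First IPv4 address of an RDMA netdev, e.g. mlx_0_0 -> 192.168.100.8
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1

    # The first and second results collected into RDMA_IP_LIST become the
    # target IPs used by the rest of the tests (roughly, per the trace):
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)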
13:39:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:11.114 13:39:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.114 13:39:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:11.114 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:10:11.114 [2024-04-18 13:39:13.807105] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:10:11.114 [2024-04-18 13:39:13.807220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.114 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.114 [2024-04-18 13:39:13.894267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.372 [2024-04-18 13:39:14.020241] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.372 [2024-04-18 13:39:14.020302] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.372 [2024-04-18 13:39:14.020319] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.372 [2024-04-18 13:39:14.020332] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.372 [2024-04-18 13:39:14.020344] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.372 [2024-04-18 13:39:14.020404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.372 [2024-04-18 13:39:14.020460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.372 [2024-04-18 13:39:14.020513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.372 [2024-04-18 13:39:14.020516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.372 13:39:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:11.372 13:39:14 -- common/autotest_common.sh@850 -- # return 0 00:10:11.372 13:39:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:11.372 13:39:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:11.372 13:39:14 -- common/autotest_common.sh@10 -- # set +x 00:10:11.630 13:39:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.630 13:39:14 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:11.888 [2024-04-18 13:39:14.605073] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x830090/0x834580) succeed. 00:10:11.888 [2024-04-18 13:39:14.617202] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x831680/0x875c10) succeed. 
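At this point nvmf_tgt is running and the RDMA transport and IB devices exist; the chunks that follow create the backing bdevs, subsystem and listener, connect with nvme-cli under a host NQN/ID, and then toggle per-host namespace visibility. Stripped of the xtrace noise, the RPC sequence being exercised is roughly (paths shortened):

    # Target side: transport, backing bdevs, subsystem, namespace, RDMA listener
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Masking: re-add the namespace hidden by default, then grant/revoke it per host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

On the host side the trace then verifies visibility with nvme list-ns /dev/nvme0 and compares NGUIDs via nvme id-ns /dev/nvme0 -n 0x<nsid> -o json | jq -r .nguid, which is what the "[ 0]:0x1" / nguid comparisons below correspond to.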
00:10:12.146 13:39:14 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:12.146 13:39:14 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:12.146 13:39:14 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:12.711 Malloc1 00:10:12.711 13:39:15 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:12.969 Malloc2 00:10:12.969 13:39:15 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.226 13:39:15 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:13.483 13:39:16 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:14.046 [2024-04-18 13:39:16.587316] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:14.046 13:39:16 -- target/ns_masking.sh@61 -- # connect 00:10:14.046 13:39:16 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b58fce28-8576-458f-9c3f-9cbd0a87ca0e -a 192.168.100.8 -s 4420 -i 4 00:10:14.303 13:39:16 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.303 13:39:16 -- common/autotest_common.sh@1184 -- # local i=0 00:10:14.303 13:39:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.303 13:39:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:14.303 13:39:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:16.207 13:39:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:16.207 13:39:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:16.207 13:39:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.207 13:39:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:16.207 13:39:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.207 13:39:18 -- common/autotest_common.sh@1194 -- # return 0 00:10:16.207 13:39:18 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:16.207 13:39:18 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:16.207 13:39:18 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:16.207 13:39:18 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:16.207 13:39:18 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:16.207 13:39:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:16.207 13:39:18 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:16.207 [ 0]:0x1 00:10:16.207 13:39:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:16.207 13:39:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:16.465 13:39:19 -- target/ns_masking.sh@40 -- # nguid=424e4a14c6d34035af9d104aff54a405 00:10:16.465 13:39:19 -- target/ns_masking.sh@41 -- # [[ 424e4a14c6d34035af9d104aff54a405 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.465 13:39:19 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:10:16.722 13:39:19 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:16.722 13:39:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:16.722 13:39:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:16.722 [ 0]:0x1 00:10:16.722 13:39:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:16.722 13:39:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:16.980 13:39:19 -- target/ns_masking.sh@40 -- # nguid=424e4a14c6d34035af9d104aff54a405 00:10:16.980 13:39:19 -- target/ns_masking.sh@41 -- # [[ 424e4a14c6d34035af9d104aff54a405 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.980 13:39:19 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:16.980 13:39:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:16.980 13:39:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:16.980 [ 1]:0x2 00:10:16.980 13:39:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:16.980 13:39:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:16.980 13:39:19 -- target/ns_masking.sh@40 -- # nguid=0dd8f5cbd50f4467ac89d44020907a56 00:10:16.980 13:39:19 -- target/ns_masking.sh@41 -- # [[ 0dd8f5cbd50f4467ac89d44020907a56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.980 13:39:19 -- target/ns_masking.sh@69 -- # disconnect 00:10:16.980 13:39:19 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.237 13:39:19 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.802 13:39:20 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:18.366 13:39:20 -- target/ns_masking.sh@77 -- # connect 1 00:10:18.366 13:39:20 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b58fce28-8576-458f-9c3f-9cbd0a87ca0e -a 192.168.100.8 -s 4420 -i 4 00:10:18.624 13:39:21 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:18.624 13:39:21 -- common/autotest_common.sh@1184 -- # local i=0 00:10:18.624 13:39:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.624 13:39:21 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:10:18.624 13:39:21 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:10:18.624 13:39:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:20.520 13:39:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:20.520 13:39:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:20.520 13:39:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.520 13:39:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:20.520 13:39:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.520 13:39:23 -- common/autotest_common.sh@1194 -- # return 0 00:10:20.520 13:39:23 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:20.520 13:39:23 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:20.520 13:39:23 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:20.520 13:39:23 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:20.520 13:39:23 -- 
target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:20.520 13:39:23 -- common/autotest_common.sh@638 -- # local es=0 00:10:20.520 13:39:23 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:20.520 13:39:23 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:20.520 13:39:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:20.520 13:39:23 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:20.520 13:39:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:20.520 13:39:23 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:20.520 13:39:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:20.520 13:39:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:20.520 13:39:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:20.520 13:39:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:20.520 13:39:23 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:20.520 13:39:23 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:20.520 13:39:23 -- common/autotest_common.sh@641 -- # es=1 00:10:20.520 13:39:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:20.520 13:39:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:20.520 13:39:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:20.520 13:39:23 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:20.520 13:39:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:20.520 13:39:23 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:20.777 [ 0]:0x2 00:10:20.777 13:39:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:20.777 13:39:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:20.777 13:39:23 -- target/ns_masking.sh@40 -- # nguid=0dd8f5cbd50f4467ac89d44020907a56 00:10:20.777 13:39:23 -- target/ns_masking.sh@41 -- # [[ 0dd8f5cbd50f4467ac89d44020907a56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:20.777 13:39:23 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:21.034 13:39:23 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:21.035 13:39:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.035 13:39:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:21.035 [ 0]:0x1 00:10:21.035 13:39:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:21.035 13:39:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.035 13:39:23 -- target/ns_masking.sh@40 -- # nguid=424e4a14c6d34035af9d104aff54a405 00:10:21.035 13:39:23 -- target/ns_masking.sh@41 -- # [[ 424e4a14c6d34035af9d104aff54a405 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.035 13:39:23 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:21.035 13:39:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.035 13:39:23 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:21.035 [ 1]:0x2 00:10:21.035 13:39:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:21.035 13:39:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.035 13:39:23 -- target/ns_masking.sh@40 -- # nguid=0dd8f5cbd50f4467ac89d44020907a56 00:10:21.035 13:39:23 -- target/ns_masking.sh@41 -- # [[ 0dd8f5cbd50f4467ac89d44020907a56 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.035 13:39:23 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:21.599 13:39:24 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:21.599 13:39:24 -- common/autotest_common.sh@638 -- # local es=0 00:10:21.599 13:39:24 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:21.599 13:39:24 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:21.599 13:39:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:21.599 13:39:24 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:21.599 13:39:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:21.599 13:39:24 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:21.599 13:39:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.599 13:39:24 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:21.599 13:39:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:21.599 13:39:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.599 13:39:24 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:21.599 13:39:24 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.599 13:39:24 -- common/autotest_common.sh@641 -- # es=1 00:10:21.599 13:39:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:21.599 13:39:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:21.599 13:39:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:21.599 13:39:24 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:21.599 13:39:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.599 13:39:24 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:21.599 [ 0]:0x2 00:10:21.599 13:39:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:21.599 13:39:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.599 13:39:24 -- target/ns_masking.sh@40 -- # nguid=0dd8f5cbd50f4467ac89d44020907a56 00:10:21.599 13:39:24 -- target/ns_masking.sh@41 -- # [[ 0dd8f5cbd50f4467ac89d44020907a56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.599 13:39:24 -- target/ns_masking.sh@91 -- # disconnect 00:10:21.599 13:39:24 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.856 13:39:24 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:22.422 13:39:24 -- target/ns_masking.sh@95 -- # connect 2 00:10:22.422 13:39:24 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b58fce28-8576-458f-9c3f-9cbd0a87ca0e -a 192.168.100.8 -s 4420 -i 4 00:10:22.680 13:39:25 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:22.680 13:39:25 -- common/autotest_common.sh@1184 -- # local i=0 00:10:22.680 13:39:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.680 13:39:25 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:22.681 13:39:25 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:22.681 13:39:25 -- 
common/autotest_common.sh@1191 -- # sleep 2 00:10:24.578 13:39:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:24.578 13:39:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:24.578 13:39:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.578 13:39:27 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:24.578 13:39:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.578 13:39:27 -- common/autotest_common.sh@1194 -- # return 0 00:10:24.578 13:39:27 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:24.578 13:39:27 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:24.578 13:39:27 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:24.578 13:39:27 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:24.578 13:39:27 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:24.578 13:39:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.578 13:39:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:24.578 [ 0]:0x1 00:10:24.578 13:39:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:24.578 13:39:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:24.835 13:39:27 -- target/ns_masking.sh@40 -- # nguid=424e4a14c6d34035af9d104aff54a405 00:10:24.835 13:39:27 -- target/ns_masking.sh@41 -- # [[ 424e4a14c6d34035af9d104aff54a405 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.835 13:39:27 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:24.835 13:39:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.835 13:39:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:24.835 [ 1]:0x2 00:10:24.836 13:39:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:24.836 13:39:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:24.836 13:39:27 -- target/ns_masking.sh@40 -- # nguid=0dd8f5cbd50f4467ac89d44020907a56 00:10:24.836 13:39:27 -- target/ns_masking.sh@41 -- # [[ 0dd8f5cbd50f4467ac89d44020907a56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.836 13:39:27 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:25.093 13:39:27 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:25.093 13:39:27 -- common/autotest_common.sh@638 -- # local es=0 00:10:25.093 13:39:27 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:25.093 13:39:27 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:25.093 13:39:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:25.093 13:39:27 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:25.093 13:39:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:25.093 13:39:27 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:25.093 13:39:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:25.093 13:39:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:25.093 13:39:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:25.093 13:39:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:25.093 13:39:27 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:25.093 13:39:27 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:25.093 13:39:27 -- common/autotest_common.sh@641 -- # es=1 00:10:25.093 13:39:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:25.093 13:39:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:25.093 13:39:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:25.093 13:39:27 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:25.093 13:39:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:25.093 13:39:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:25.093 [ 0]:0x2 00:10:25.093 13:39:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:25.093 13:39:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:25.351 13:39:27 -- target/ns_masking.sh@40 -- # nguid=0dd8f5cbd50f4467ac89d44020907a56 00:10:25.351 13:39:27 -- target/ns_masking.sh@41 -- # [[ 0dd8f5cbd50f4467ac89d44020907a56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:25.351 13:39:27 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:25.351 13:39:27 -- common/autotest_common.sh@638 -- # local es=0 00:10:25.351 13:39:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:25.351 13:39:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:25.351 13:39:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:25.351 13:39:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:25.351 13:39:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:25.351 13:39:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:25.351 13:39:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:25.351 13:39:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:25.351 13:39:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:25.351 13:39:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:25.609 [2024-04-18 13:39:28.241335] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:25.609 request: 00:10:25.609 { 00:10:25.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.609 "nsid": 2, 00:10:25.609 "host": "nqn.2016-06.io.spdk:host1", 00:10:25.609 "method": "nvmf_ns_remove_host", 00:10:25.609 "req_id": 1 00:10:25.609 } 00:10:25.609 Got JSON-RPC error response 00:10:25.609 response: 00:10:25.609 { 00:10:25.609 "code": -32602, 00:10:25.609 "message": "Invalid parameters" 00:10:25.609 } 00:10:25.609 13:39:28 -- common/autotest_common.sh@641 -- # es=1 00:10:25.609 13:39:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:25.609 13:39:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:25.609 13:39:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:25.609 13:39:28 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:25.609 13:39:28 -- 
common/autotest_common.sh@638 -- # local es=0 00:10:25.609 13:39:28 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:25.609 13:39:28 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:25.609 13:39:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:25.609 13:39:28 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:25.609 13:39:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:25.609 13:39:28 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:25.609 13:39:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:25.609 13:39:28 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:25.609 13:39:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:25.609 13:39:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:25.609 13:39:28 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:25.609 13:39:28 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:25.609 13:39:28 -- common/autotest_common.sh@641 -- # es=1 00:10:25.609 13:39:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:25.609 13:39:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:25.609 13:39:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:25.609 13:39:28 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:25.609 13:39:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:25.609 13:39:28 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:25.609 [ 0]:0x2 00:10:25.609 13:39:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:25.609 13:39:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:25.609 13:39:28 -- target/ns_masking.sh@40 -- # nguid=0dd8f5cbd50f4467ac89d44020907a56 00:10:25.610 13:39:28 -- target/ns_masking.sh@41 -- # [[ 0dd8f5cbd50f4467ac89d44020907a56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:25.610 13:39:28 -- target/ns_masking.sh@108 -- # disconnect 00:10:25.610 13:39:28 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.175 13:39:28 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.434 13:39:29 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:26.434 13:39:29 -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:26.434 13:39:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:26.434 13:39:29 -- nvmf/common.sh@117 -- # sync 00:10:26.434 13:39:29 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:26.434 13:39:29 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:26.434 13:39:29 -- nvmf/common.sh@120 -- # set +e 00:10:26.434 13:39:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:26.434 13:39:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:26.434 rmmod nvme_rdma 00:10:26.434 rmmod nvme_fabrics 00:10:26.434 13:39:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:26.434 13:39:29 -- nvmf/common.sh@124 -- # set -e 00:10:26.434 13:39:29 -- nvmf/common.sh@125 -- # return 0 00:10:26.434 13:39:29 -- nvmf/common.sh@478 -- # '[' -n 1085539 ']' 00:10:26.434 13:39:29 -- nvmf/common.sh@479 -- # killprocess 1085539 00:10:26.434 13:39:29 -- common/autotest_common.sh@936 -- # '[' -z 1085539 ']' 00:10:26.434 13:39:29 -- 
common/autotest_common.sh@940 -- # kill -0 1085539 00:10:26.434 13:39:29 -- common/autotest_common.sh@941 -- # uname 00:10:26.434 13:39:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:26.434 13:39:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1085539 00:10:26.434 13:39:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:26.434 13:39:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:26.434 13:39:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1085539' 00:10:26.434 killing process with pid 1085539 00:10:26.434 13:39:29 -- common/autotest_common.sh@955 -- # kill 1085539 00:10:26.434 13:39:29 -- common/autotest_common.sh@960 -- # wait 1085539 00:10:27.001 13:39:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:27.001 13:39:29 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:27.001 00:10:27.001 real 0m18.852s 00:10:27.001 user 1m8.429s 00:10:27.001 sys 0m3.490s 00:10:27.001 13:39:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:27.001 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:10:27.001 ************************************ 00:10:27.001 END TEST nvmf_ns_masking 00:10:27.001 ************************************ 00:10:27.001 13:39:29 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:27.001 13:39:29 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:10:27.001 13:39:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:27.001 13:39:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.001 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:10:27.001 ************************************ 00:10:27.001 START TEST nvmf_nvme_cli 00:10:27.001 ************************************ 00:10:27.001 13:39:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:10:27.259 * Looking for test storage... 
00:10:27.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:27.259 13:39:29 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.259 13:39:29 -- nvmf/common.sh@7 -- # uname -s 00:10:27.259 13:39:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.259 13:39:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.259 13:39:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.259 13:39:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.259 13:39:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.259 13:39:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.259 13:39:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.259 13:39:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.259 13:39:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.259 13:39:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.259 13:39:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:10:27.259 13:39:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:10:27.259 13:39:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.259 13:39:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.259 13:39:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.259 13:39:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.259 13:39:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:27.259 13:39:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.259 13:39:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.259 13:39:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.259 13:39:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.259 13:39:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.259 13:39:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.259 13:39:29 -- paths/export.sh@5 -- # export PATH 00:10:27.259 13:39:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.259 13:39:29 -- nvmf/common.sh@47 -- # : 0 00:10:27.259 13:39:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.259 13:39:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.259 13:39:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.259 13:39:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.259 13:39:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.259 13:39:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.259 13:39:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.259 13:39:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.260 13:39:29 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.260 13:39:29 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.260 13:39:29 -- target/nvme_cli.sh@14 -- # devs=() 00:10:27.260 13:39:29 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:27.260 13:39:29 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:27.260 13:39:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.260 13:39:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:27.260 13:39:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:27.260 13:39:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:27.260 13:39:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.260 13:39:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.260 13:39:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.260 13:39:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:27.260 13:39:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:27.260 13:39:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.260 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:10:29.788 13:39:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:29.788 13:39:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:29.788 13:39:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:29.788 13:39:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:29.788 13:39:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:29.788 13:39:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:29.788 13:39:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:29.788 13:39:32 -- nvmf/common.sh@295 -- # net_devs=() 00:10:29.788 13:39:32 -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:10:29.788 13:39:32 -- nvmf/common.sh@296 -- # e810=() 00:10:29.788 13:39:32 -- nvmf/common.sh@296 -- # local -ga e810 00:10:29.788 13:39:32 -- nvmf/common.sh@297 -- # x722=() 00:10:29.788 13:39:32 -- nvmf/common.sh@297 -- # local -ga x722 00:10:29.788 13:39:32 -- nvmf/common.sh@298 -- # mlx=() 00:10:29.788 13:39:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:29.788 13:39:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.788 13:39:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:29.788 13:39:32 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:29.788 13:39:32 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:29.788 13:39:32 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:29.788 13:39:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:29.788 13:39:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:10:29.788 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:10:29.788 13:39:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.788 13:39:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:10:29.788 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:10:29.788 13:39:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.788 13:39:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:29.788 13:39:32 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
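The probing above walks a table of Intel and Mellanox PCI IDs and then resolves each matching function to its net interface through sysfs. A rough standalone equivalent, offered as a sketch rather than the harness code itself, assuming pciutils is installed and the ConnectX ports are bound to mlx5_core:

# Sketch: list Mellanox (vendor 0x15b3) functions and their netdevs via sysfs.
for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue
        echo "Found net device under $pci: $(basename "$netdir")"
    done
done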
00:10:29.788 13:39:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:29.788 13:39:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.788 13:39:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:10:29.788 Found net devices under 0000:81:00.0: mlx_0_0 00:10:29.788 13:39:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.788 13:39:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.788 13:39:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:29.788 13:39:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.788 13:39:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:10:29.788 Found net devices under 0000:81:00.1: mlx_0_1 00:10:29.788 13:39:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.788 13:39:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:29.788 13:39:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:29.788 13:39:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:29.788 13:39:32 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:29.788 13:39:32 -- nvmf/common.sh@58 -- # uname 00:10:29.788 13:39:32 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:29.788 13:39:32 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:29.788 13:39:32 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:29.788 13:39:32 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:29.788 13:39:32 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:29.788 13:39:32 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:29.788 13:39:32 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:29.788 13:39:32 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:29.788 13:39:32 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:29.788 13:39:32 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:29.788 13:39:32 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:29.788 13:39:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.788 13:39:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:29.788 13:39:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:29.788 13:39:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.788 13:39:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:29.788 13:39:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:29.788 13:39:32 -- nvmf/common.sh@105 -- # continue 2 00:10:29.788 13:39:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.788 13:39:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:29.788 13:39:32 -- nvmf/common.sh@105 -- # continue 2 00:10:29.788 
13:39:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:29.788 13:39:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:29.788 13:39:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:29.788 13:39:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:29.788 13:39:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.788 13:39:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.788 13:39:32 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:29.788 13:39:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:29.788 13:39:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:29.788 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.788 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:10:29.788 altname enp129s0f0np0 00:10:29.789 inet 192.168.100.8/24 scope global mlx_0_0 00:10:29.789 valid_lft forever preferred_lft forever 00:10:29.789 13:39:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:29.789 13:39:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:29.789 13:39:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.789 13:39:32 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:29.789 13:39:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:29.789 13:39:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:29.789 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.789 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:10:29.789 altname enp129s0f1np1 00:10:29.789 inet 192.168.100.9/24 scope global mlx_0_1 00:10:29.789 valid_lft forever preferred_lft forever 00:10:29.789 13:39:32 -- nvmf/common.sh@411 -- # return 0 00:10:29.789 13:39:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:29.789 13:39:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:29.789 13:39:32 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:29.789 13:39:32 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:29.789 13:39:32 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:29.789 13:39:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.789 13:39:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:29.789 13:39:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:29.789 13:39:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.789 13:39:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:29.789 13:39:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.789 13:39:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.789 13:39:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.789 13:39:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:29.789 13:39:32 -- nvmf/common.sh@105 -- # continue 2 00:10:29.789 13:39:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:29.789 13:39:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.789 13:39:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.789 13:39:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.789 13:39:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.789 13:39:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:29.789 13:39:32 -- nvmf/common.sh@105 -- # 
continue 2 00:10:29.789 13:39:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:29.789 13:39:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:29.789 13:39:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.789 13:39:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:29.789 13:39:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:29.789 13:39:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:29.789 13:39:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:29.789 13:39:32 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:29.789 192.168.100.9' 00:10:29.789 13:39:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:29.789 192.168.100.9' 00:10:29.789 13:39:32 -- nvmf/common.sh@446 -- # head -n 1 00:10:29.789 13:39:32 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:29.789 13:39:32 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:29.789 192.168.100.9' 00:10:29.789 13:39:32 -- nvmf/common.sh@447 -- # tail -n +2 00:10:29.789 13:39:32 -- nvmf/common.sh@447 -- # head -n 1 00:10:29.789 13:39:32 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:29.789 13:39:32 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:29.789 13:39:32 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:29.789 13:39:32 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:29.789 13:39:32 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:29.789 13:39:32 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:29.789 13:39:32 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:29.789 13:39:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:29.789 13:39:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:29.789 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:10:29.789 13:39:32 -- nvmf/common.sh@470 -- # nvmfpid=1089631 00:10:29.789 13:39:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.789 13:39:32 -- nvmf/common.sh@471 -- # waitforlisten 1089631 00:10:29.789 13:39:32 -- common/autotest_common.sh@817 -- # '[' -z 1089631 ']' 00:10:29.789 13:39:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.789 13:39:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:29.789 13:39:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.789 13:39:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:29.789 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:10:30.048 [2024-04-18 13:39:32.608391] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:10:30.048 [2024-04-18 13:39:32.608474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.048 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.048 [2024-04-18 13:39:32.688135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.048 [2024-04-18 13:39:32.813301] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.048 [2024-04-18 13:39:32.813366] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.048 [2024-04-18 13:39:32.813382] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.048 [2024-04-18 13:39:32.813396] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.048 [2024-04-18 13:39:32.813408] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.048 [2024-04-18 13:39:32.813502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.048 [2024-04-18 13:39:32.813559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.048 [2024-04-18 13:39:32.816962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.048 [2024-04-18 13:39:32.816976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.306 13:39:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:30.306 13:39:32 -- common/autotest_common.sh@850 -- # return 0 00:10:30.306 13:39:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:30.306 13:39:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:30.306 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:10:30.306 13:39:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.306 13:39:32 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:30.306 13:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.306 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:10:30.306 [2024-04-18 13:39:33.016775] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24a1090/0x24a5580) succeed. 00:10:30.306 [2024-04-18 13:39:33.029064] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24a2680/0x24e6c10) succeed. 
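The target-side RPC setup that follows, and the host-side discover/connect it is paired with later in the log, can be replayed by hand. A hedged sketch, assuming the nvmf_tgt from this build is already running and listening on the default /var/tmp/spdk.sock, and that 192.168.100.8 is the RDMA-capable address used in this run:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Target side: RDMA transport, two malloc bdevs, a subsystem with namespaces, and a listener.
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# Host side: discover over RDMA, connect (the harness adds -i 15 for these mlx5 NICs), then disconnect.
nvme discover -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1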
00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.564 13:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.564 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:10:30.564 Malloc0 00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:30.564 13:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.564 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:10:30.564 Malloc1 00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:30.564 13:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.564 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.564 13:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.564 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:30.564 13:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.564 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:30.564 13:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.564 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:10:30.564 [2024-04-18 13:39:33.275249] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:30.564 13:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.564 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:10:30.564 13:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.564 13:39:33 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 4420 00:10:30.824 00:10:30.824 Discovery Log Number of Records 2, Generation counter 2 00:10:30.824 =====Discovery Log Entry 0====== 00:10:30.824 trtype: rdma 00:10:30.824 adrfam: ipv4 00:10:30.824 subtype: current discovery subsystem 00:10:30.824 treq: not required 00:10:30.824 portid: 0 00:10:30.824 trsvcid: 4420 00:10:30.825 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:30.825 traddr: 192.168.100.8 00:10:30.825 eflags: explicit discovery connections, duplicate discovery information 00:10:30.825 rdma_prtype: not specified 00:10:30.825 rdma_qptype: connected 00:10:30.825 rdma_cms: rdma-cm 00:10:30.825 rdma_pkey: 0x0000 00:10:30.825 =====Discovery Log Entry 1====== 00:10:30.825 trtype: rdma 
00:10:30.825 adrfam: ipv4 00:10:30.825 subtype: nvme subsystem 00:10:30.825 treq: not required 00:10:30.825 portid: 0 00:10:30.825 trsvcid: 4420 00:10:30.825 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:30.825 traddr: 192.168.100.8 00:10:30.825 eflags: none 00:10:30.825 rdma_prtype: not specified 00:10:30.825 rdma_qptype: connected 00:10:30.825 rdma_cms: rdma-cm 00:10:30.825 rdma_pkey: 0x0000 00:10:30.825 13:39:33 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:30.825 13:39:33 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:30.825 13:39:33 -- nvmf/common.sh@511 -- # local dev _ 00:10:30.825 13:39:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:30.825 13:39:33 -- nvmf/common.sh@510 -- # nvme list 00:10:30.825 13:39:33 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:30.825 13:39:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:30.825 13:39:33 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:30.825 13:39:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:30.825 13:39:33 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:30.825 13:39:33 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:31.783 13:39:34 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:31.783 13:39:34 -- common/autotest_common.sh@1184 -- # local i=0 00:10:31.783 13:39:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.783 13:39:34 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:31.783 13:39:34 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:31.783 13:39:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:33.682 13:39:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:33.940 13:39:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:33.940 13:39:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.940 13:39:36 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:33.940 13:39:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.940 13:39:36 -- common/autotest_common.sh@1194 -- # return 0 00:10:33.940 13:39:36 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:33.940 13:39:36 -- nvmf/common.sh@511 -- # local dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@510 -- # nvme list 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:33.940 /dev/nvme0n1 ]] 00:10:33.940 13:39:36 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:33.940 13:39:36 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
00:10:33.940 13:39:36 -- nvmf/common.sh@511 -- # local dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@510 -- # nvme list 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:33.940 13:39:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:33.940 13:39:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:33.940 13:39:36 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:33.940 13:39:36 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.873 13:39:37 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.873 13:39:37 -- common/autotest_common.sh@1205 -- # local i=0 00:10:34.873 13:39:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:34.873 13:39:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.873 13:39:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:34.873 13:39:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.873 13:39:37 -- common/autotest_common.sh@1217 -- # return 0 00:10:34.873 13:39:37 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:34.873 13:39:37 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.873 13:39:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:34.873 13:39:37 -- common/autotest_common.sh@10 -- # set +x 00:10:34.873 13:39:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:34.873 13:39:37 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:34.873 13:39:37 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:34.873 13:39:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:34.873 13:39:37 -- nvmf/common.sh@117 -- # sync 00:10:34.873 13:39:37 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:34.873 13:39:37 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:34.873 13:39:37 -- nvmf/common.sh@120 -- # set +e 00:10:34.873 13:39:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.873 13:39:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:34.873 rmmod nvme_rdma 00:10:35.130 rmmod nvme_fabrics 00:10:35.131 13:39:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.131 13:39:37 -- nvmf/common.sh@124 -- # set -e 00:10:35.131 13:39:37 -- nvmf/common.sh@125 -- # return 0 00:10:35.131 13:39:37 -- nvmf/common.sh@478 -- # '[' -n 1089631 ']' 00:10:35.131 13:39:37 -- nvmf/common.sh@479 -- # killprocess 1089631 00:10:35.131 13:39:37 -- common/autotest_common.sh@936 -- # '[' -z 1089631 ']' 00:10:35.131 13:39:37 -- common/autotest_common.sh@940 -- # kill -0 1089631 00:10:35.131 13:39:37 -- common/autotest_common.sh@941 -- # uname 00:10:35.131 13:39:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:35.131 13:39:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1089631 00:10:35.131 13:39:37 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:35.131 13:39:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:35.131 13:39:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1089631' 00:10:35.131 killing process with pid 1089631 00:10:35.131 13:39:37 -- common/autotest_common.sh@955 -- # kill 1089631 00:10:35.131 13:39:37 -- common/autotest_common.sh@960 -- # wait 1089631 00:10:35.388 13:39:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:35.388 13:39:38 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:35.388 00:10:35.388 real 0m8.385s 00:10:35.388 user 0m21.813s 00:10:35.388 sys 0m2.466s 00:10:35.388 13:39:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:35.388 13:39:38 -- common/autotest_common.sh@10 -- # set +x 00:10:35.388 ************************************ 00:10:35.388 END TEST nvmf_nvme_cli 00:10:35.388 ************************************ 00:10:35.388 13:39:38 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:35.388 13:39:38 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:35.388 13:39:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:35.388 13:39:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:35.388 13:39:38 -- common/autotest_common.sh@10 -- # set +x 00:10:35.647 ************************************ 00:10:35.647 START TEST nvmf_host_management 00:10:35.647 ************************************ 00:10:35.647 13:39:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:35.647 * Looking for test storage... 00:10:35.647 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:35.647 13:39:38 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.647 13:39:38 -- nvmf/common.sh@7 -- # uname -s 00:10:35.647 13:39:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.647 13:39:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.647 13:39:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.647 13:39:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.647 13:39:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.647 13:39:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.647 13:39:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.647 13:39:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.647 13:39:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.647 13:39:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.647 13:39:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:10:35.647 13:39:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:10:35.647 13:39:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.647 13:39:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.647 13:39:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.647 13:39:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.647 13:39:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:35.647 13:39:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.647 13:39:38 -- scripts/common.sh@510 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.647 13:39:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.647 13:39:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.647 13:39:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.647 13:39:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.647 13:39:38 -- paths/export.sh@5 -- # export PATH 00:10:35.647 13:39:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.647 13:39:38 -- nvmf/common.sh@47 -- # : 0 00:10:35.647 13:39:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.647 13:39:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.647 13:39:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.647 13:39:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.647 13:39:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.647 13:39:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.647 13:39:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.647 13:39:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.647 13:39:38 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.647 13:39:38 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.647 13:39:38 -- target/host_management.sh@105 -- # nvmftestinit 00:10:35.647 13:39:38 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:35.647 13:39:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.647 13:39:38 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:10:35.647 13:39:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:35.647 13:39:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:35.647 13:39:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.647 13:39:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.647 13:39:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.647 13:39:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:35.647 13:39:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:35.647 13:39:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.647 13:39:38 -- common/autotest_common.sh@10 -- # set +x 00:10:38.926 13:39:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:38.926 13:39:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.926 13:39:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.926 13:39:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.926 13:39:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.926 13:39:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.926 13:39:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.926 13:39:41 -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.926 13:39:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.926 13:39:41 -- nvmf/common.sh@296 -- # e810=() 00:10:38.926 13:39:41 -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.926 13:39:41 -- nvmf/common.sh@297 -- # x722=() 00:10:38.926 13:39:41 -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.926 13:39:41 -- nvmf/common.sh@298 -- # mlx=() 00:10:38.926 13:39:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.926 13:39:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.926 13:39:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.926 13:39:41 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:38.926 13:39:41 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:38.926 13:39:41 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:38.926 13:39:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.926 13:39:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.926 13:39:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:10:38.926 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:10:38.926 13:39:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:38.926 
13:39:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.926 13:39:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.926 13:39:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:10:38.926 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:10:38.926 13:39:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.926 13:39:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.926 13:39:41 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.926 13:39:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.926 13:39:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:38.926 13:39:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.926 13:39:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:10:38.926 Found net devices under 0000:81:00.0: mlx_0_0 00:10:38.926 13:39:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.926 13:39:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.926 13:39:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.926 13:39:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:38.926 13:39:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.926 13:39:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:10:38.926 Found net devices under 0000:81:00.1: mlx_0_1 00:10:38.926 13:39:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.926 13:39:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:38.926 13:39:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:38.926 13:39:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:38.926 13:39:41 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:38.926 13:39:41 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:38.926 13:39:41 -- nvmf/common.sh@58 -- # uname 00:10:38.926 13:39:41 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:38.926 13:39:41 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:38.926 13:39:41 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:38.926 13:39:41 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:38.926 13:39:41 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:38.926 13:39:41 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:38.926 13:39:41 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:38.926 13:39:41 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:38.926 13:39:41 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:38.926 13:39:41 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:38.926 13:39:41 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:38.926 13:39:41 -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:10:38.926 13:39:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:38.926 13:39:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:38.926 13:39:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.926 13:39:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:38.926 13:39:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.926 13:39:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.926 13:39:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@105 -- # continue 2 00:10:38.927 13:39:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@105 -- # continue 2 00:10:38.927 13:39:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:38.927 13:39:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.927 13:39:41 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:38.927 13:39:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:38.927 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.927 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:10:38.927 altname enp129s0f0np0 00:10:38.927 inet 192.168.100.8/24 scope global mlx_0_0 00:10:38.927 valid_lft forever preferred_lft forever 00:10:38.927 13:39:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:38.927 13:39:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.927 13:39:41 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:38.927 13:39:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:38.927 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.927 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:10:38.927 altname enp129s0f1np1 00:10:38.927 inet 192.168.100.9/24 scope global mlx_0_1 00:10:38.927 valid_lft forever preferred_lft forever 00:10:38.927 13:39:41 -- nvmf/common.sh@411 -- # return 0 00:10:38.927 13:39:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:38.927 13:39:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:38.927 13:39:41 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:38.927 13:39:41 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:38.927 13:39:41 -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:38.927 13:39:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:38.927 13:39:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:38.927 13:39:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.927 13:39:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:38.927 13:39:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@105 -- # continue 2 00:10:38.927 13:39:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.927 13:39:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.927 13:39:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@105 -- # continue 2 00:10:38.927 13:39:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:38.927 13:39:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.927 13:39:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:38.927 13:39:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:38.927 13:39:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:38.927 13:39:41 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:38.927 192.168.100.9' 00:10:38.927 13:39:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:38.927 192.168.100.9' 00:10:38.927 13:39:41 -- nvmf/common.sh@446 -- # head -n 1 00:10:38.927 13:39:41 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:38.927 13:39:41 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:38.927 192.168.100.9' 00:10:38.927 13:39:41 -- nvmf/common.sh@447 -- # tail -n +2 00:10:38.927 13:39:41 -- nvmf/common.sh@447 -- # head -n 1 00:10:38.927 13:39:41 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:38.927 13:39:41 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:38.927 13:39:41 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:38.927 13:39:41 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:38.927 13:39:41 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:38.927 13:39:41 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:38.927 13:39:41 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:10:38.927 13:39:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:38.927 13:39:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:38.927 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:10:38.927 
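Everything get_ip_address does above reduces to reading the first IPv4 address off each RDMA-backed interface. A standalone sketch of the same derivation, assuming the interfaces are named mlx_0_0 and mlx_0_1 as in this run:

# Sketch: derive the target IPs the same way the harness does.
get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
NVMF_FIRST_TARGET_IP=$(get_ip mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip mlx_0_1)   # 192.168.100.9 in this run
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"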
************************************ 00:10:38.927 START TEST nvmf_host_management 00:10:38.927 ************************************ 00:10:38.927 13:39:41 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:10:38.927 13:39:41 -- target/host_management.sh@69 -- # starttarget 00:10:38.927 13:39:41 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:38.927 13:39:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:38.927 13:39:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:38.927 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:10:38.927 13:39:41 -- nvmf/common.sh@470 -- # nvmfpid=1092312 00:10:38.927 13:39:41 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:38.927 13:39:41 -- nvmf/common.sh@471 -- # waitforlisten 1092312 00:10:38.927 13:39:41 -- common/autotest_common.sh@817 -- # '[' -z 1092312 ']' 00:10:38.927 13:39:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.927 13:39:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:38.927 13:39:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.927 13:39:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:38.927 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:10:38.927 [2024-04-18 13:39:41.432125] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:10:38.927 [2024-04-18 13:39:41.432211] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.927 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.927 [2024-04-18 13:39:41.517599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.927 [2024-04-18 13:39:41.657039] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.927 [2024-04-18 13:39:41.657096] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.927 [2024-04-18 13:39:41.657112] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.927 [2024-04-18 13:39:41.657125] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.927 [2024-04-18 13:39:41.657137] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:38.927 [2024-04-18 13:39:41.657204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.927 [2024-04-18 13:39:41.657258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.927 [2024-04-18 13:39:41.657322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:38.928 [2024-04-18 13:39:41.657326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.186 13:39:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:39.186 13:39:41 -- common/autotest_common.sh@850 -- # return 0 00:10:39.186 13:39:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:39.186 13:39:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:39.186 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 13:39:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.186 13:39:41 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:39.186 13:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.186 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 [2024-04-18 13:39:41.859842] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a06350/0x1a0a840) succeed. 00:10:39.186 [2024-04-18 13:39:41.871918] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a07940/0x1a4bed0) succeed. 00:10:39.445 13:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.445 13:39:42 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:39.445 13:39:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:39.445 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.445 13:39:42 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:39.445 13:39:42 -- target/host_management.sh@23 -- # cat 00:10:39.445 13:39:42 -- target/host_management.sh@30 -- # rpc_cmd 00:10:39.445 13:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.445 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.445 Malloc0 00:10:39.445 [2024-04-18 13:39:42.084722] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:39.445 13:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.445 13:39:42 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:39.445 13:39:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:39.445 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.445 13:39:42 -- target/host_management.sh@73 -- # perfpid=1092483 00:10:39.445 13:39:42 -- target/host_management.sh@74 -- # waitforlisten 1092483 /var/tmp/bdevperf.sock 00:10:39.445 13:39:42 -- common/autotest_common.sh@817 -- # '[' -z 1092483 ']' 00:10:39.445 13:39:42 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:39.445 13:39:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:39.445 13:39:42 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:39.445 13:39:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:39.445 13:39:42 -- nvmf/common.sh@521 -- # config=() 00:10:39.445 13:39:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:10:39.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:39.445 13:39:42 -- nvmf/common.sh@521 -- # local subsystem config 00:10:39.445 13:39:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:39.445 13:39:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:39.445 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.445 13:39:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:39.445 { 00:10:39.445 "params": { 00:10:39.445 "name": "Nvme$subsystem", 00:10:39.445 "trtype": "$TEST_TRANSPORT", 00:10:39.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.445 "adrfam": "ipv4", 00:10:39.445 "trsvcid": "$NVMF_PORT", 00:10:39.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.445 "hdgst": ${hdgst:-false}, 00:10:39.445 "ddgst": ${ddgst:-false} 00:10:39.445 }, 00:10:39.445 "method": "bdev_nvme_attach_controller" 00:10:39.445 } 00:10:39.445 EOF 00:10:39.445 )") 00:10:39.445 13:39:42 -- nvmf/common.sh@543 -- # cat 00:10:39.445 13:39:42 -- nvmf/common.sh@545 -- # jq . 00:10:39.445 13:39:42 -- nvmf/common.sh@546 -- # IFS=, 00:10:39.445 13:39:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:39.445 "params": { 00:10:39.445 "name": "Nvme0", 00:10:39.445 "trtype": "rdma", 00:10:39.445 "traddr": "192.168.100.8", 00:10:39.445 "adrfam": "ipv4", 00:10:39.445 "trsvcid": "4420", 00:10:39.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:39.445 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:39.445 "hdgst": false, 00:10:39.445 "ddgst": false 00:10:39.445 }, 00:10:39.445 "method": "bdev_nvme_attach_controller" 00:10:39.445 }' 00:10:39.445 [2024-04-18 13:39:42.157507] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:10:39.445 [2024-04-18 13:39:42.157594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092483 ] 00:10:39.445 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.445 [2024-04-18 13:39:42.238039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.703 [2024-04-18 13:39:42.361419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.961 Running I/O for 10 seconds... 
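
The config+=("$(cat <<-EOF ... EOF)") / jq . / IFS=, / printf '%s\n' sequence traced above is how gen_nvmf_target_json assembles the bdevperf configuration: one bdev_nvme_attach_controller stanza per subsystem is collected into an array, the entries are joined with commas, and jq validates the result before it is handed to bdevperf. A compressed sketch of that pattern with the values substituted as they appear in the trace (the outer JSON-config wrapper that bdevperf ultimately consumes is not expanded in the xtrace, so it is omitted here as well):

config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .    # jq both validates and pretty-prints the joined JSON
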
00:10:39.961 13:39:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:39.961 13:39:42 -- common/autotest_common.sh@850 -- # return 0 00:10:39.961 13:39:42 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:39.961 13:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.961 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.961 13:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.961 13:39:42 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:39.961 13:39:42 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:39.961 13:39:42 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:39.961 13:39:42 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:39.961 13:39:42 -- target/host_management.sh@52 -- # local ret=1 00:10:39.961 13:39:42 -- target/host_management.sh@53 -- # local i 00:10:39.961 13:39:42 -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:39.961 13:39:42 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:39.961 13:39:42 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:39.961 13:39:42 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:39.961 13:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.961 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.961 13:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.961 13:39:42 -- target/host_management.sh@55 -- # read_io_count=109 00:10:39.961 13:39:42 -- target/host_management.sh@58 -- # '[' 109 -ge 100 ']' 00:10:39.961 13:39:42 -- target/host_management.sh@59 -- # ret=0 00:10:39.961 13:39:42 -- target/host_management.sh@60 -- # break 00:10:39.961 13:39:42 -- target/host_management.sh@64 -- # return 0 00:10:39.961 13:39:42 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:39.961 13:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.961 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.961 13:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.961 13:39:42 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:39.961 13:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.961 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:10:39.961 [2024-04-18 13:39:42.712184] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 4 00:10:39.961 13:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.961 13:39:42 -- target/host_management.sh@87 -- # sleep 1 00:10:41.347 [2024-04-18 13:39:43.716900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.716953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.716990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:10:41.347 [2024-04-18 13:39:43.717402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001928d400 len:0x10000 key:0x182700 00:10:41.347 [2024-04-18 13:39:43.717433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001927d380 len:0x10000 key:0x182700 00:10:41.347 [2024-04-18 13:39:43.717465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.347 [2024-04-18 13:39:43.717483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001926d300 len:0x10000 key:0x182700 00:10:41.347 [2024-04-18 13:39:43.717498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001925d280 len:0x10000 key:0x182700 00:10:41.348 [2024-04-18 13:39:43.717530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001924d200 len:0x10000 key:0x182700 00:10:41.348 [2024-04-18 13:39:43.717561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001923d180 len:0x10000 key:0x182700 00:10:41.348 [2024-04-18 13:39:43.717593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001922d100 len:0x10000 key:0x182700 00:10:41.348 [2024-04-18 13:39:43.717625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001921d080 len:0x10000 key:0x182700 00:10:41.348 [2024-04-18 13:39:43.717660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001920d000 len:0x10000 key:0x182700 00:10:41.348 [2024-04-18 13:39:43.717693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:10:41.348 [2024-04-18 13:39:43.717725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:10:41.348 [2024-04-18 13:39:43.717758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:10:41.348 [2024-04-18 13:39:43.717790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:10:41.348 [2024-04-18 13:39:43.717822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 [2024-04-18 13:39:43.717840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:10:41.348 13:39:43 -- target/host_management.sh@91 -- # kill -9 1092483 00:10:41.348 [2024-04-18 13:39:43.717855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2870 p:0 m:0 dnr:0 00:10:41.348 13:39:43 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 
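
What produced the wall of "ABORTED - SQ DELETION" completions above: the test first lets traffic flow, polling bdevperf's iostat until the namespace has served at least 100 reads (the read_io_count=109 / -ge 100 checks earlier in the trace), then removes the host with nvmf_subsystem_remove_host, so the queue pair is torn down and every in-flight WRITE completes aborted; finally bdevperf itself is killed with -9 and its CPU lock files are removed. The polling half, reconstructed from the commands visible in the trace (the sleep interval is an assumption, it is not shown in the xtrace):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
i=10
while (( i != 0 )); do
    read_io_count=$($rpc_py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                    | jq -r '.bdevs[0].num_read_ops')
    (( read_io_count >= 100 )) && break    # enough I/O observed; go ahead and remove the host
    sleep 0.25                             # assumed delay between polls
    (( i-- ))
done
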
00:10:41.348 13:39:43 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:41.348 13:39:43 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:41.348 13:39:43 -- nvmf/common.sh@521 -- # config=() 00:10:41.348 13:39:43 -- nvmf/common.sh@521 -- # local subsystem config 00:10:41.348 13:39:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:41.348 13:39:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:41.348 { 00:10:41.348 "params": { 00:10:41.348 "name": "Nvme$subsystem", 00:10:41.348 "trtype": "$TEST_TRANSPORT", 00:10:41.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.348 "adrfam": "ipv4", 00:10:41.348 "trsvcid": "$NVMF_PORT", 00:10:41.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.348 "hdgst": ${hdgst:-false}, 00:10:41.348 "ddgst": ${ddgst:-false} 00:10:41.348 }, 00:10:41.348 "method": "bdev_nvme_attach_controller" 00:10:41.348 } 00:10:41.348 EOF 00:10:41.348 )") 00:10:41.348 13:39:43 -- nvmf/common.sh@543 -- # cat 00:10:41.348 13:39:43 -- nvmf/common.sh@545 -- # jq . 00:10:41.348 13:39:43 -- nvmf/common.sh@546 -- # IFS=, 00:10:41.348 13:39:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:41.348 "params": { 00:10:41.348 "name": "Nvme0", 00:10:41.348 "trtype": "rdma", 00:10:41.348 "traddr": "192.168.100.8", 00:10:41.348 "adrfam": "ipv4", 00:10:41.348 "trsvcid": "4420", 00:10:41.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:41.348 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:41.348 "hdgst": false, 00:10:41.348 "ddgst": false 00:10:41.348 }, 00:10:41.348 "method": "bdev_nvme_attach_controller" 00:10:41.348 }' 00:10:41.348 [2024-04-18 13:39:43.774683] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:10:41.348 [2024-04-18 13:39:43.774785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092629 ] 00:10:41.348 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.348 [2024-04-18 13:39:43.864336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.348 [2024-04-18 13:39:43.987778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.655 Running I/O for 1 seconds... 
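
Both bdevperf runs in this test receive their target definition the same way: the JSON emitted by gen_nvmf_target_json is handed over through a bash process substitution, which is why the command line above reads --json /dev/fd/62 rather than a file on disk (the shell picks the fd number). The shape of that invocation, roughly as host_management.sh issues it:

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$rootdir"/build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 1
# <(...) expands to a /dev/fd/NN path; bdevperf simply reads the config from that fd
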
00:10:42.589 00:10:42.589 Latency(us) 00:10:42.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.589 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:42.589 Verification LBA range: start 0x0 length 0x400 00:10:42.589 Nvme0n1 : 1.02 2303.58 143.97 0.00 0.00 27117.57 1990.35 48351.00 00:10:42.589 =================================================================================================================== 00:10:42.589 Total : 2303.58 143.97 0.00 0.00 27117.57 1990.35 48351.00 00:10:42.846 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1092483 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:10:42.846 13:39:45 -- target/host_management.sh@102 -- # stoptarget 00:10:42.846 13:39:45 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:42.846 13:39:45 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:42.846 13:39:45 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:42.846 13:39:45 -- target/host_management.sh@40 -- # nvmftestfini 00:10:42.846 13:39:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:42.846 13:39:45 -- nvmf/common.sh@117 -- # sync 00:10:42.846 13:39:45 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:42.846 13:39:45 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:42.846 13:39:45 -- nvmf/common.sh@120 -- # set +e 00:10:42.846 13:39:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.846 13:39:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:42.846 rmmod nvme_rdma 00:10:42.846 rmmod nvme_fabrics 00:10:42.846 13:39:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.846 13:39:45 -- nvmf/common.sh@124 -- # set -e 00:10:42.846 13:39:45 -- nvmf/common.sh@125 -- # return 0 00:10:42.846 13:39:45 -- nvmf/common.sh@478 -- # '[' -n 1092312 ']' 00:10:42.846 13:39:45 -- nvmf/common.sh@479 -- # killprocess 1092312 00:10:42.846 13:39:45 -- common/autotest_common.sh@936 -- # '[' -z 1092312 ']' 00:10:42.847 13:39:45 -- common/autotest_common.sh@940 -- # kill -0 1092312 00:10:42.847 13:39:45 -- common/autotest_common.sh@941 -- # uname 00:10:42.847 13:39:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:42.847 13:39:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1092312 00:10:42.847 13:39:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:42.847 13:39:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:42.847 13:39:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1092312' 00:10:42.847 killing process with pid 1092312 00:10:42.847 13:39:45 -- common/autotest_common.sh@955 -- # kill 1092312 00:10:42.847 13:39:45 -- common/autotest_common.sh@960 -- # wait 1092312 00:10:43.411 [2024-04-18 13:39:46.014720] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:43.411 13:39:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:43.411 13:39:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:43.411 00:10:43.411 real 0m4.658s 00:10:43.411 user 0m20.487s 00:10:43.411 sys 0m1.032s 00:10:43.411 13:39:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.411 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:10:43.411 
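
The single-job summary above is internally consistent: at queue depth 64 and a 64 KiB I/O size, 2303.58 IOPS works out to the reported ~143.97 MiB/s, and Little's law (queue depth divided by IOPS) lands close to the reported 27117.57 us average latency. A quick check of both numbers:

awk 'BEGIN {
    iops = 2303.58; io_size = 65536; qd = 64
    printf "throughput : %.2f MiB/s\n", iops * io_size / (1024 * 1024)  # ~143.97, matches the MiB/s column
    printf "avg latency: %.0f us\n", qd / iops * 1e6                    # ~27783 us, near the measured 27117.57 us
}'
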
************************************ 00:10:43.411 END TEST nvmf_host_management 00:10:43.411 ************************************ 00:10:43.411 13:39:46 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:43.411 00:10:43.411 real 0m7.773s 00:10:43.411 user 0m21.556s 00:10:43.411 sys 0m3.193s 00:10:43.411 13:39:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.411 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:10:43.411 ************************************ 00:10:43.411 END TEST nvmf_host_management 00:10:43.411 ************************************ 00:10:43.411 13:39:46 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:43.411 13:39:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:43.411 13:39:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.411 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:10:43.411 ************************************ 00:10:43.411 START TEST nvmf_lvol 00:10:43.411 ************************************ 00:10:43.411 13:39:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:43.668 * Looking for test storage... 00:10:43.668 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:43.668 13:39:46 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.668 13:39:46 -- nvmf/common.sh@7 -- # uname -s 00:10:43.668 13:39:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.668 13:39:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.668 13:39:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.668 13:39:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.668 13:39:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.668 13:39:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.668 13:39:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.668 13:39:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.668 13:39:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.668 13:39:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.668 13:39:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:10:43.668 13:39:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:10:43.668 13:39:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.668 13:39:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.668 13:39:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.668 13:39:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.668 13:39:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:43.668 13:39:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.668 13:39:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.668 13:39:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.668 13:39:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.668 13:39:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.668 13:39:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.668 13:39:46 -- paths/export.sh@5 -- # export PATH 00:10:43.668 13:39:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.668 13:39:46 -- nvmf/common.sh@47 -- # : 0 00:10:43.668 13:39:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.668 13:39:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.668 13:39:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.668 13:39:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.668 13:39:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.668 13:39:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.668 13:39:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.668 13:39:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.668 13:39:46 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.668 13:39:46 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.668 13:39:46 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:43.668 13:39:46 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:43.668 13:39:46 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:43.668 13:39:46 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:43.668 13:39:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:43.668 13:39:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:43.668 13:39:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:43.668 13:39:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:43.668 13:39:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:43.668 13:39:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.668 13:39:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.668 13:39:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.668 13:39:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:43.668 13:39:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:43.668 13:39:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.668 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:10:46.947 13:39:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:46.947 13:39:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:46.947 13:39:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:46.947 13:39:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:46.947 13:39:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:46.947 13:39:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:46.947 13:39:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:46.947 13:39:49 -- nvmf/common.sh@295 -- # net_devs=() 00:10:46.947 13:39:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:46.947 13:39:49 -- nvmf/common.sh@296 -- # e810=() 00:10:46.947 13:39:49 -- nvmf/common.sh@296 -- # local -ga e810 00:10:46.947 13:39:49 -- nvmf/common.sh@297 -- # x722=() 00:10:46.947 13:39:49 -- nvmf/common.sh@297 -- # local -ga x722 00:10:46.947 13:39:49 -- nvmf/common.sh@298 -- # mlx=() 00:10:46.947 13:39:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:46.947 13:39:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.947 13:39:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:46.947 13:39:49 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:46.947 13:39:49 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:46.947 13:39:49 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:46.947 13:39:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:46.947 13:39:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:10:46.947 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:10:46.947 13:39:49 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:46.947 13:39:49 -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.947 13:39:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:10:46.947 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:10:46.947 13:39:49 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.947 13:39:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:46.947 13:39:49 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.947 13:39:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:46.947 13:39:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.947 13:39:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:10:46.947 Found net devices under 0000:81:00.0: mlx_0_0 00:10:46.947 13:39:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.947 13:39:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.947 13:39:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:46.947 13:39:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.947 13:39:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:10:46.947 Found net devices under 0000:81:00.1: mlx_0_1 00:10:46.947 13:39:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.947 13:39:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:46.947 13:39:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:46.947 13:39:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:46.947 13:39:49 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:46.947 13:39:49 -- nvmf/common.sh@58 -- # uname 00:10:46.947 13:39:49 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:46.947 13:39:49 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:46.947 13:39:49 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:46.947 13:39:49 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:46.947 13:39:49 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:46.947 13:39:49 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:46.947 13:39:49 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:46.947 13:39:49 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:46.947 13:39:49 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:46.947 13:39:49 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:46.947 13:39:49 -- nvmf/common.sh@73 -- # get_rdma_if_list 
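
Device discovery in this trace works entirely from sysfs: the two Mellanox functions at 0000:81:00.0 and 0000:81:00.1 (device ID 0x1015, driver mlx5_core) are kept, and each one's netdev name is read out of the PCI device's net/ directory, which is how mlx_0_0 and mlx_0_1 end up in net_devs. The same lookup done by hand:

for pci in 0000:81:00.0 0000:81:00.1; do
    # every netdev registered by that PCI function appears as an entry under .../net/
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
done
# per the trace, this prints mlx_0_0 for .0 and mlx_0_1 for .1
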
00:10:46.947 13:39:49 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.947 13:39:49 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:46.947 13:39:49 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:46.947 13:39:49 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.947 13:39:49 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:46.947 13:39:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:46.947 13:39:49 -- nvmf/common.sh@105 -- # continue 2 00:10:46.947 13:39:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.947 13:39:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:46.947 13:39:49 -- nvmf/common.sh@105 -- # continue 2 00:10:46.947 13:39:49 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:46.947 13:39:49 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:46.947 13:39:49 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:46.947 13:39:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:46.947 13:39:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.947 13:39:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.947 13:39:49 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:46.947 13:39:49 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:46.947 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.947 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:10:46.947 altname enp129s0f0np0 00:10:46.947 inet 192.168.100.8/24 scope global mlx_0_0 00:10:46.947 valid_lft forever preferred_lft forever 00:10:46.947 13:39:49 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:46.947 13:39:49 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:46.947 13:39:49 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:46.947 13:39:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:46.947 13:39:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.947 13:39:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.947 13:39:49 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:46.947 13:39:49 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:46.947 13:39:49 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:46.947 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.947 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:10:46.947 altname enp129s0f1np1 00:10:46.947 inet 192.168.100.9/24 scope global mlx_0_1 00:10:46.947 valid_lft forever preferred_lft forever 00:10:46.947 13:39:49 -- nvmf/common.sh@411 -- # return 0 00:10:46.947 13:39:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:46.947 13:39:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:46.947 13:39:49 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:46.948 13:39:49 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:46.948 13:39:49 -- nvmf/common.sh@86 
-- # get_rdma_if_list 00:10:46.948 13:39:49 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.948 13:39:49 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:46.948 13:39:49 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:46.948 13:39:49 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.948 13:39:49 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:46.948 13:39:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.948 13:39:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.948 13:39:49 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.948 13:39:49 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:46.948 13:39:49 -- nvmf/common.sh@105 -- # continue 2 00:10:46.948 13:39:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.948 13:39:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.948 13:39:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.948 13:39:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.948 13:39:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.948 13:39:49 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:46.948 13:39:49 -- nvmf/common.sh@105 -- # continue 2 00:10:46.948 13:39:49 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:46.948 13:39:49 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:46.948 13:39:49 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:46.948 13:39:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:46.948 13:39:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.948 13:39:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.948 13:39:49 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:46.948 13:39:49 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:46.948 13:39:49 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:46.948 13:39:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:46.948 13:39:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.948 13:39:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.948 13:39:49 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:46.948 192.168.100.9' 00:10:46.948 13:39:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:46.948 192.168.100.9' 00:10:46.948 13:39:49 -- nvmf/common.sh@446 -- # head -n 1 00:10:46.948 13:39:49 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:46.948 13:39:49 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:46.948 192.168.100.9' 00:10:46.948 13:39:49 -- nvmf/common.sh@447 -- # tail -n +2 00:10:46.948 13:39:49 -- nvmf/common.sh@447 -- # head -n 1 00:10:46.948 13:39:49 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:46.948 13:39:49 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:46.948 13:39:49 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:46.948 13:39:49 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:46.948 13:39:49 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:46.948 13:39:49 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:46.948 13:39:49 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:46.948 13:39:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:46.948 13:39:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:46.948 13:39:49 -- common/autotest_common.sh@10 -- # set +x 00:10:46.948 13:39:49 -- 
nvmf/common.sh@470 -- # nvmfpid=1094995 00:10:46.948 13:39:49 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:46.948 13:39:49 -- nvmf/common.sh@471 -- # waitforlisten 1094995 00:10:46.948 13:39:49 -- common/autotest_common.sh@817 -- # '[' -z 1094995 ']' 00:10:46.948 13:39:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.948 13:39:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:46.948 13:39:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.948 13:39:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:46.948 13:39:49 -- common/autotest_common.sh@10 -- # set +x 00:10:46.948 [2024-04-18 13:39:49.406362] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:10:46.948 [2024-04-18 13:39:49.406450] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.948 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.948 [2024-04-18 13:39:49.485545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:46.948 [2024-04-18 13:39:49.609868] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.948 [2024-04-18 13:39:49.609935] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.948 [2024-04-18 13:39:49.609961] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.948 [2024-04-18 13:39:49.609975] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.948 [2024-04-18 13:39:49.609987] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.948 [2024-04-18 13:39:49.610048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.948 [2024-04-18 13:39:49.610103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.948 [2024-04-18 13:39:49.610107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.948 13:39:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:46.948 13:39:49 -- common/autotest_common.sh@850 -- # return 0 00:10:46.948 13:39:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:46.948 13:39:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:46.948 13:39:49 -- common/autotest_common.sh@10 -- # set +x 00:10:47.206 13:39:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.206 13:39:49 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:47.463 [2024-04-18 13:39:50.131413] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fcf4f0/0x1fd39e0) succeed. 00:10:47.463 [2024-04-18 13:39:50.143557] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fd0a40/0x2015070) succeed. 
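
Setup for the lvol test mirrors the host-management run: nvmf_tgt comes up with core mask 0x7, the script waits for its RPC socket, and the RDMA transport is created, at which point both mlx5 ports are registered (the two create_ib_device notices above). The transport-creation RPC exactly as the trace runs it, with rpc_py pointing at the in-tree script:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The subsystem, namespace and listener for the lvol bdev are added with further rpc.py calls a little later in the trace.
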
00:10:47.720 13:39:50 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.978 13:39:50 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:47.978 13:39:50 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.236 13:39:50 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:48.236 13:39:50 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:48.492 13:39:51 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:48.749 13:39:51 -- target/nvmf_lvol.sh@29 -- # lvs=cebfd74c-3b98-4dc5-98d4-4f1cc15e9f7f 00:10:48.749 13:39:51 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cebfd74c-3b98-4dc5-98d4-4f1cc15e9f7f lvol 20 00:10:49.006 13:39:51 -- target/nvmf_lvol.sh@32 -- # lvol=70e7e598-b81d-4711-a965-2cc598a6f867 00:10:49.006 13:39:51 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:49.264 13:39:52 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 70e7e598-b81d-4711-a965-2cc598a6f867 00:10:49.829 13:39:52 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:50.087 [2024-04-18 13:39:52.699686] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:50.087 13:39:52 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:50.345 13:39:53 -- target/nvmf_lvol.sh@42 -- # perf_pid=1095426 00:10:50.345 13:39:53 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:50.345 13:39:53 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:50.345 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.278 13:39:54 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 70e7e598-b81d-4711-a965-2cc598a6f867 MY_SNAPSHOT 00:10:51.843 13:39:54 -- target/nvmf_lvol.sh@47 -- # snapshot=e70e6e3c-b046-4b29-887d-cd83dbb49f79 00:10:51.843 13:39:54 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 70e7e598-b81d-4711-a965-2cc598a6f867 30 00:10:52.101 13:39:54 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e70e6e3c-b046-4b29-887d-cd83dbb49f79 MY_CLONE 00:10:52.359 13:39:55 -- target/nvmf_lvol.sh@49 -- # clone=ff1e8f05-29f4-4c71-979a-3c9fa31a090c 00:10:52.359 13:39:55 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ff1e8f05-29f4-4c71-979a-3c9fa31a090c 00:10:52.927 13:39:55 -- target/nvmf_lvol.sh@53 -- # wait 1095426 00:11:02.905 Initializing NVMe Controllers 00:11:02.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:11:02.905 Controller IO queue size 128, less than required. 00:11:02.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:02.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:02.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:02.905 Initialization complete. Launching workers. 00:11:02.905 ======================================================== 00:11:02.905 Latency(us) 00:11:02.905 Device Information : IOPS MiB/s Average min max 00:11:02.905 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14888.10 58.16 8600.03 3337.15 48075.36 00:11:02.905 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14884.10 58.14 8602.71 3329.79 51411.11 00:11:02.905 ======================================================== 00:11:02.905 Total : 29772.20 116.30 8601.37 3329.79 51411.11 00:11:02.905 00:11:02.905 13:40:04 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:02.905 13:40:04 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 70e7e598-b81d-4711-a965-2cc598a6f867 00:11:02.905 13:40:05 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cebfd74c-3b98-4dc5-98d4-4f1cc15e9f7f 00:11:03.165 13:40:05 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:03.165 13:40:05 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:03.165 13:40:05 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:03.165 13:40:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:03.165 13:40:05 -- nvmf/common.sh@117 -- # sync 00:11:03.165 13:40:05 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:03.165 13:40:05 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:03.165 13:40:05 -- nvmf/common.sh@120 -- # set +e 00:11:03.165 13:40:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.165 13:40:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:03.165 rmmod nvme_rdma 00:11:03.165 rmmod nvme_fabrics 00:11:03.165 13:40:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.165 13:40:05 -- nvmf/common.sh@124 -- # set -e 00:11:03.165 13:40:05 -- nvmf/common.sh@125 -- # return 0 00:11:03.165 13:40:05 -- nvmf/common.sh@478 -- # '[' -n 1094995 ']' 00:11:03.165 13:40:05 -- nvmf/common.sh@479 -- # killprocess 1094995 00:11:03.165 13:40:05 -- common/autotest_common.sh@936 -- # '[' -z 1094995 ']' 00:11:03.165 13:40:05 -- common/autotest_common.sh@940 -- # kill -0 1094995 00:11:03.165 13:40:05 -- common/autotest_common.sh@941 -- # uname 00:11:03.165 13:40:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:03.165 13:40:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1094995 00:11:03.165 13:40:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:03.165 13:40:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:03.165 13:40:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1094995' 00:11:03.165 killing process with pid 1094995 00:11:03.165 13:40:05 -- common/autotest_common.sh@955 -- # kill 1094995 00:11:03.165 13:40:05 -- common/autotest_common.sh@960 -- # wait 1094995 00:11:03.805 13:40:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:03.805 13:40:06 -- 
nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:03.805 00:11:03.805 real 0m20.141s 00:11:03.805 user 1m17.673s 00:11:03.805 sys 0m3.734s 00:11:03.805 13:40:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:03.805 13:40:06 -- common/autotest_common.sh@10 -- # set +x 00:11:03.805 ************************************ 00:11:03.805 END TEST nvmf_lvol 00:11:03.805 ************************************ 00:11:03.805 13:40:06 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:03.805 13:40:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:03.805 13:40:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.805 13:40:06 -- common/autotest_common.sh@10 -- # set +x 00:11:03.805 ************************************ 00:11:03.805 START TEST nvmf_lvs_grow 00:11:03.805 ************************************ 00:11:03.805 13:40:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:03.805 * Looking for test storage... 00:11:03.805 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.806 13:40:06 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.806 13:40:06 -- nvmf/common.sh@7 -- # uname -s 00:11:03.806 13:40:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.806 13:40:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.806 13:40:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.806 13:40:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.806 13:40:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.806 13:40:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.806 13:40:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.806 13:40:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.806 13:40:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.806 13:40:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.806 13:40:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:11:03.806 13:40:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:11:03.806 13:40:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.806 13:40:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.806 13:40:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.806 13:40:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.806 13:40:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:03.806 13:40:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.806 13:40:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.806 13:40:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.806 13:40:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.806 13:40:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.806 13:40:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.806 13:40:06 -- paths/export.sh@5 -- # export PATH 00:11:03.806 13:40:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.806 13:40:06 -- nvmf/common.sh@47 -- # : 0 00:11:03.806 13:40:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.806 13:40:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.806 13:40:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.806 13:40:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.806 13:40:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.806 13:40:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.806 13:40:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.806 13:40:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.806 13:40:06 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:03.806 13:40:06 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:03.806 13:40:06 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:11:03.806 13:40:06 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:03.806 13:40:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.806 13:40:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:03.806 13:40:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:03.806 13:40:06 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:11:03.806 13:40:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.806 13:40:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.806 13:40:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.806 13:40:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:03.806 13:40:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:03.806 13:40:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:03.806 13:40:06 -- common/autotest_common.sh@10 -- # set +x 00:11:07.087 13:40:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:07.087 13:40:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.087 13:40:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.087 13:40:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.087 13:40:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.087 13:40:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.087 13:40:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.087 13:40:09 -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.087 13:40:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.087 13:40:09 -- nvmf/common.sh@296 -- # e810=() 00:11:07.087 13:40:09 -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.087 13:40:09 -- nvmf/common.sh@297 -- # x722=() 00:11:07.087 13:40:09 -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.087 13:40:09 -- nvmf/common.sh@298 -- # mlx=() 00:11:07.087 13:40:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.087 13:40:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.087 13:40:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.087 13:40:09 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:07.087 13:40:09 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:07.087 13:40:09 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:07.087 13:40:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.087 13:40:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.087 13:40:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:11:07.087 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:11:07.087 13:40:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.087 13:40:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.087 13:40:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:11:07.087 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:11:07.087 13:40:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.087 13:40:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.087 13:40:09 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.087 13:40:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.087 13:40:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:07.087 13:40:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.087 13:40:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:11:07.087 Found net devices under 0000:81:00.0: mlx_0_0 00:11:07.087 13:40:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.087 13:40:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.087 13:40:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.087 13:40:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:07.087 13:40:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.087 13:40:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:11:07.087 Found net devices under 0000:81:00.1: mlx_0_1 00:11:07.087 13:40:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.087 13:40:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:07.087 13:40:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:07.087 13:40:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:07.087 13:40:09 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:07.087 13:40:09 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:07.087 13:40:09 -- nvmf/common.sh@58 -- # uname 00:11:07.087 13:40:09 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:07.087 13:40:09 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:07.087 13:40:09 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:07.087 13:40:09 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:07.087 13:40:09 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:07.087 13:40:09 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:07.088 13:40:09 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:07.088 13:40:09 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:07.088 13:40:09 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:07.088 13:40:09 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:07.088 13:40:09 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:07.088 13:40:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.088 13:40:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:07.088 13:40:09 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:07.088 13:40:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.088 13:40:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:07.088 13:40:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@105 -- # continue 2 00:11:07.088 13:40:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@105 -- # continue 2 00:11:07.088 13:40:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:07.088 13:40:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.088 13:40:09 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:07.088 13:40:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:07.088 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.088 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:11:07.088 altname enp129s0f0np0 00:11:07.088 inet 192.168.100.8/24 scope global mlx_0_0 00:11:07.088 valid_lft forever preferred_lft forever 00:11:07.088 13:40:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:07.088 13:40:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.088 13:40:09 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:07.088 13:40:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:07.088 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.088 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:11:07.088 altname enp129s0f1np1 00:11:07.088 inet 192.168.100.9/24 scope global mlx_0_1 00:11:07.088 valid_lft forever preferred_lft forever 00:11:07.088 13:40:09 -- nvmf/common.sh@411 -- # return 0 00:11:07.088 13:40:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:07.088 13:40:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:07.088 13:40:09 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:07.088 13:40:09 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:07.088 13:40:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.088 13:40:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
00:11:07.088 13:40:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:07.088 13:40:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.088 13:40:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:07.088 13:40:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@105 -- # continue 2 00:11:07.088 13:40:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.088 13:40:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.088 13:40:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@105 -- # continue 2 00:11:07.088 13:40:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:07.088 13:40:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.088 13:40:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:07.088 13:40:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.088 13:40:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.088 13:40:09 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:07.088 192.168.100.9' 00:11:07.088 13:40:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:07.088 192.168.100.9' 00:11:07.088 13:40:09 -- nvmf/common.sh@446 -- # head -n 1 00:11:07.088 13:40:09 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:07.088 13:40:09 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:07.088 192.168.100.9' 00:11:07.088 13:40:09 -- nvmf/common.sh@447 -- # tail -n +2 00:11:07.088 13:40:09 -- nvmf/common.sh@447 -- # head -n 1 00:11:07.088 13:40:09 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:07.088 13:40:09 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:07.088 13:40:09 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:07.088 13:40:09 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:07.088 13:40:09 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:07.088 13:40:09 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:07.088 13:40:09 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:11:07.088 13:40:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:07.088 13:40:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:07.088 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:11:07.088 13:40:09 -- nvmf/common.sh@470 -- # nvmfpid=1099093 00:11:07.088 13:40:09 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 
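A hedged aside on the address discovery traced a few lines above: get_ip_address amounts to parsing "ip -o -4 addr show" for each mlx_0_* netdev. A minimal standalone rendering (a sketch, not the exact nvmf/common.sh code) looks like:

    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per IPv4 address; field 4 is addr/prefix,
        # so strip the prefix length to keep the bare address
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run

The two addresses end up in RDMA_IP_LIST and as NVMF_FIRST/SECOND_TARGET_IP further down the trace.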
00:11:07.088 13:40:09 -- nvmf/common.sh@471 -- # waitforlisten 1099093 00:11:07.088 13:40:09 -- common/autotest_common.sh@817 -- # '[' -z 1099093 ']' 00:11:07.088 13:40:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.088 13:40:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:07.088 13:40:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.088 13:40:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:07.088 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:11:07.088 [2024-04-18 13:40:09.461052] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:07.088 [2024-04-18 13:40:09.461151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.088 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.088 [2024-04-18 13:40:09.546221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.088 [2024-04-18 13:40:09.667257] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.088 [2024-04-18 13:40:09.667322] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.088 [2024-04-18 13:40:09.667340] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.088 [2024-04-18 13:40:09.667354] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.088 [2024-04-18 13:40:09.667366] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.088 [2024-04-18 13:40:09.667399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.088 13:40:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:07.088 13:40:09 -- common/autotest_common.sh@850 -- # return 0 00:11:07.088 13:40:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:07.088 13:40:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:07.089 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:11:07.089 13:40:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.089 13:40:09 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:07.654 [2024-04-18 13:40:10.240337] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x87af50/0x87f440) succeed. 00:11:07.654 [2024-04-18 13:40:10.252424] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x87c450/0x8c0ad0) succeed. 
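At this point the target side for the lvs_grow tests is fully up. A condensed, hedged sketch of that bring-up, with the paths, core mask and transport options copied from this run (the waitforlisten poll below is a simplification of the real helper):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py

    # start the target on core 0 with shm id 0 and all tracepoint groups enabled,
    # as nvmfappstart -m 0x1 did above
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # simplified stand-in for waitforlisten: poll until the RPC socket answers
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # RDMA transport for the listeners created later; the shared-buffer and
    # IO-unit sizing matches NVMF_TRANSPORT_OPTS from nvmftestinit
    "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices just above confirm that both mlx5 ports were picked up when the transport was created.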
00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:11:07.654 13:40:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:07.654 13:40:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.654 13:40:10 -- common/autotest_common.sh@10 -- # set +x 00:11:07.654 ************************************ 00:11:07.654 START TEST lvs_grow_clean 00:11:07.654 ************************************ 00:11:07.654 13:40:10 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.654 13:40:10 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:08.253 13:40:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:08.253 13:40:10 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:08.511 13:40:11 -- target/nvmf_lvs_grow.sh@28 -- # lvs=3720157d-ad56-4719-8b68-0d4cba961d03 00:11:08.511 13:40:11 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:08.511 13:40:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:08.768 13:40:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:08.768 13:40:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:08.768 13:40:11 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3720157d-ad56-4719-8b68-0d4cba961d03 lvol 150 00:11:09.025 13:40:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a3cf14c-694b-47b9-8867-f1bd8fadca80 00:11:09.025 13:40:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:09.025 13:40:11 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:09.282 [2024-04-18 13:40:11.916627] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:09.282 [2024-04-18 13:40:11.916733] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:09.282 true 00:11:09.282 13:40:11 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:09.282 13:40:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:11:09.540 13:40:12 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:09.540 13:40:12 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:09.797 13:40:12 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a3cf14c-694b-47b9-8867-f1bd8fadca80 00:11:10.055 13:40:12 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:10.312 [2024-04-18 13:40:13.052282] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:10.313 13:40:13 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:10.878 13:40:13 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1099547 00:11:10.878 13:40:13 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:10.878 13:40:13 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:10.878 13:40:13 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1099547 /var/tmp/bdevperf.sock 00:11:10.878 13:40:13 -- common/autotest_common.sh@817 -- # '[' -z 1099547 ']' 00:11:10.878 13:40:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:10.878 13:40:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:10.878 13:40:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:10.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:10.878 13:40:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:10.878 13:40:13 -- common/autotest_common.sh@10 -- # set +x 00:11:10.878 [2024-04-18 13:40:13.458683] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:11:10.878 [2024-04-18 13:40:13.458772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099547 ] 00:11:10.878 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.878 [2024-04-18 13:40:13.535689] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.878 [2024-04-18 13:40:13.655476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.136 13:40:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.136 13:40:13 -- common/autotest_common.sh@850 -- # return 0 00:11:11.136 13:40:13 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:11.393 Nvme0n1 00:11:11.393 13:40:14 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:11.957 [ 00:11:11.957 { 00:11:11.957 "name": "Nvme0n1", 00:11:11.957 "aliases": [ 00:11:11.957 "0a3cf14c-694b-47b9-8867-f1bd8fadca80" 00:11:11.957 ], 00:11:11.957 "product_name": "NVMe disk", 00:11:11.957 "block_size": 4096, 00:11:11.957 "num_blocks": 38912, 00:11:11.957 "uuid": "0a3cf14c-694b-47b9-8867-f1bd8fadca80", 00:11:11.957 "assigned_rate_limits": { 00:11:11.957 "rw_ios_per_sec": 0, 00:11:11.957 "rw_mbytes_per_sec": 0, 00:11:11.957 "r_mbytes_per_sec": 0, 00:11:11.957 "w_mbytes_per_sec": 0 00:11:11.957 }, 00:11:11.957 "claimed": false, 00:11:11.957 "zoned": false, 00:11:11.957 "supported_io_types": { 00:11:11.957 "read": true, 00:11:11.957 "write": true, 00:11:11.957 "unmap": true, 00:11:11.957 "write_zeroes": true, 00:11:11.957 "flush": true, 00:11:11.957 "reset": true, 00:11:11.957 "compare": true, 00:11:11.957 "compare_and_write": true, 00:11:11.957 "abort": true, 00:11:11.957 "nvme_admin": true, 00:11:11.957 "nvme_io": true 00:11:11.957 }, 00:11:11.957 "memory_domains": [ 00:11:11.957 { 00:11:11.957 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:11:11.957 "dma_device_type": 0 00:11:11.957 } 00:11:11.957 ], 00:11:11.957 "driver_specific": { 00:11:11.957 "nvme": [ 00:11:11.957 { 00:11:11.957 "trid": { 00:11:11.957 "trtype": "RDMA", 00:11:11.957 "adrfam": "IPv4", 00:11:11.957 "traddr": "192.168.100.8", 00:11:11.957 "trsvcid": "4420", 00:11:11.957 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:11.957 }, 00:11:11.957 "ctrlr_data": { 00:11:11.957 "cntlid": 1, 00:11:11.957 "vendor_id": "0x8086", 00:11:11.957 "model_number": "SPDK bdev Controller", 00:11:11.957 "serial_number": "SPDK0", 00:11:11.957 "firmware_revision": "24.05", 00:11:11.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:11.958 "oacs": { 00:11:11.958 "security": 0, 00:11:11.958 "format": 0, 00:11:11.958 "firmware": 0, 00:11:11.958 "ns_manage": 0 00:11:11.958 }, 00:11:11.958 "multi_ctrlr": true, 00:11:11.958 "ana_reporting": false 00:11:11.958 }, 00:11:11.958 "vs": { 00:11:11.958 "nvme_version": "1.3" 00:11:11.958 }, 00:11:11.958 "ns_data": { 00:11:11.958 "id": 1, 00:11:11.958 "can_share": true 00:11:11.958 } 00:11:11.958 } 00:11:11.958 ], 00:11:11.958 "mp_policy": "active_passive" 00:11:11.958 } 00:11:11.958 } 00:11:11.958 ] 00:11:11.958 13:40:14 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1099686 00:11:11.958 13:40:14 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:11.958 13:40:14 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:12.215 Running I/O for 10 seconds... 00:11:13.147 Latency(us) 00:11:13.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.147 Nvme0n1 : 1.00 21569.00 84.25 0.00 0.00 0.00 0.00 0.00 00:11:13.147 =================================================================================================================== 00:11:13.147 Total : 21569.00 84.25 0.00 0.00 0.00 0.00 0.00 00:11:13.147 00:11:14.080 13:40:16 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:14.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.080 Nvme0n1 : 2.00 21889.00 85.50 0.00 0.00 0.00 0.00 0.00 00:11:14.080 =================================================================================================================== 00:11:14.080 Total : 21889.00 85.50 0.00 0.00 0.00 0.00 0.00 00:11:14.080 00:11:14.080 true 00:11:14.080 13:40:16 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:14.080 13:40:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:14.644 13:40:17 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:14.644 13:40:17 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:14.644 13:40:17 -- target/nvmf_lvs_grow.sh@65 -- # wait 1099686 00:11:15.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:15.209 Nvme0n1 : 3.00 22090.67 86.29 0.00 0.00 0.00 0.00 0.00 00:11:15.209 =================================================================================================================== 00:11:15.209 Total : 22090.67 86.29 0.00 0.00 0.00 0.00 0.00 00:11:15.209 00:11:16.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.140 Nvme0n1 : 4.00 22231.75 86.84 0.00 0.00 0.00 0.00 0.00 00:11:16.140 =================================================================================================================== 00:11:16.140 Total : 22231.75 86.84 0.00 0.00 0.00 0.00 0.00 00:11:16.140 00:11:17.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.074 Nvme0n1 : 5.00 22329.00 87.22 0.00 0.00 0.00 0.00 0.00 00:11:17.074 =================================================================================================================== 00:11:17.074 Total : 22329.00 87.22 0.00 0.00 0.00 0.00 0.00 00:11:17.074 00:11:18.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.006 Nvme0n1 : 6.00 22352.50 87.31 0.00 0.00 0.00 0.00 0.00 00:11:18.006 =================================================================================================================== 00:11:18.006 Total : 22352.50 87.31 0.00 0.00 0.00 0.00 0.00 00:11:18.006 00:11:19.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.379 Nvme0n1 : 7.00 22405.14 87.52 0.00 0.00 0.00 0.00 0.00 00:11:19.379 =================================================================================================================== 00:11:19.379 Total : 22405.14 87.52 0.00 0.00 0.00 0.00 0.00 00:11:19.379 00:11:20.341 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.341 Nvme0n1 : 8.00 22460.25 87.74 0.00 0.00 0.00 0.00 0.00 00:11:20.341 =================================================================================================================== 00:11:20.341 Total : 22460.25 87.74 0.00 0.00 0.00 0.00 0.00 00:11:20.341 00:11:21.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.274 Nvme0n1 : 9.00 22509.56 87.93 0.00 0.00 0.00 0.00 0.00 00:11:21.274 =================================================================================================================== 00:11:21.274 Total : 22509.56 87.93 0.00 0.00 0.00 0.00 0.00 00:11:21.274 00:11:22.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.221 Nvme0n1 : 10.00 22546.70 88.07 0.00 0.00 0.00 0.00 0.00 00:11:22.221 =================================================================================================================== 00:11:22.221 Total : 22546.70 88.07 0.00 0.00 0.00 0.00 0.00 00:11:22.221 00:11:22.221 00:11:22.221 Latency(us) 00:11:22.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.221 Nvme0n1 : 10.01 22547.49 88.08 0.00 0.00 5672.04 4077.80 12233.39 00:11:22.221 =================================================================================================================== 00:11:22.221 Total : 22547.49 88.08 0.00 0.00 5672.04 4077.80 12233.39 00:11:22.221 0 00:11:22.221 13:40:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1099547 00:11:22.221 13:40:24 -- common/autotest_common.sh@936 -- # '[' -z 1099547 ']' 00:11:22.221 13:40:24 -- common/autotest_common.sh@940 -- # kill -0 1099547 00:11:22.221 13:40:24 -- common/autotest_common.sh@941 -- # uname 00:11:22.221 13:40:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.221 13:40:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1099547 00:11:22.221 13:40:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:22.221 13:40:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:22.221 13:40:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1099547' 00:11:22.221 killing process with pid 1099547 00:11:22.221 13:40:24 -- common/autotest_common.sh@955 -- # kill 1099547 00:11:22.221 Received shutdown signal, test time was about 10.000000 seconds 00:11:22.221 00:11:22.221 Latency(us) 00:11:22.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.221 =================================================================================================================== 00:11:22.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:22.221 13:40:24 -- common/autotest_common.sh@960 -- # wait 1099547 00:11:22.479 13:40:25 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:22.737 13:40:25 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:22.737 13:40:25 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:23.302 13:40:25 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:23.302 13:40:25 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:11:23.302 13:40:25 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:23.559 [2024-04-18 13:40:26.175573] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:23.559 13:40:26 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:23.559 13:40:26 -- common/autotest_common.sh@638 -- # local es=0 00:11:23.559 13:40:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:23.559 13:40:26 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:23.559 13:40:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:23.559 13:40:26 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:23.559 13:40:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:23.559 13:40:26 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:23.559 13:40:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:23.559 13:40:26 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:23.559 13:40:26 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:23.559 13:40:26 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:23.817 request: 00:11:23.817 { 00:11:23.817 "uuid": "3720157d-ad56-4719-8b68-0d4cba961d03", 00:11:23.817 "method": "bdev_lvol_get_lvstores", 00:11:23.817 "req_id": 1 00:11:23.817 } 00:11:23.817 Got JSON-RPC error response 00:11:23.817 response: 00:11:23.817 { 00:11:23.817 "code": -19, 00:11:23.817 "message": "No such device" 00:11:23.817 } 00:11:23.817 13:40:26 -- common/autotest_common.sh@641 -- # es=1 00:11:23.817 13:40:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:23.817 13:40:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:23.817 13:40:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:23.817 13:40:26 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:24.075 aio_bdev 00:11:24.075 13:40:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0a3cf14c-694b-47b9-8867-f1bd8fadca80 00:11:24.075 13:40:26 -- common/autotest_common.sh@885 -- # local bdev_name=0a3cf14c-694b-47b9-8867-f1bd8fadca80 00:11:24.075 13:40:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:24.075 13:40:26 -- common/autotest_common.sh@887 -- # local i 00:11:24.075 13:40:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:24.075 13:40:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:24.075 13:40:26 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:24.332 13:40:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0a3cf14c-694b-47b9-8867-f1bd8fadca80 -t 2000 00:11:24.896 [ 00:11:24.896 { 00:11:24.896 "name": 
"0a3cf14c-694b-47b9-8867-f1bd8fadca80", 00:11:24.896 "aliases": [ 00:11:24.896 "lvs/lvol" 00:11:24.896 ], 00:11:24.896 "product_name": "Logical Volume", 00:11:24.896 "block_size": 4096, 00:11:24.896 "num_blocks": 38912, 00:11:24.896 "uuid": "0a3cf14c-694b-47b9-8867-f1bd8fadca80", 00:11:24.896 "assigned_rate_limits": { 00:11:24.896 "rw_ios_per_sec": 0, 00:11:24.897 "rw_mbytes_per_sec": 0, 00:11:24.897 "r_mbytes_per_sec": 0, 00:11:24.897 "w_mbytes_per_sec": 0 00:11:24.897 }, 00:11:24.897 "claimed": false, 00:11:24.897 "zoned": false, 00:11:24.897 "supported_io_types": { 00:11:24.897 "read": true, 00:11:24.897 "write": true, 00:11:24.897 "unmap": true, 00:11:24.897 "write_zeroes": true, 00:11:24.897 "flush": false, 00:11:24.897 "reset": true, 00:11:24.897 "compare": false, 00:11:24.897 "compare_and_write": false, 00:11:24.897 "abort": false, 00:11:24.897 "nvme_admin": false, 00:11:24.897 "nvme_io": false 00:11:24.897 }, 00:11:24.897 "driver_specific": { 00:11:24.897 "lvol": { 00:11:24.897 "lvol_store_uuid": "3720157d-ad56-4719-8b68-0d4cba961d03", 00:11:24.897 "base_bdev": "aio_bdev", 00:11:24.897 "thin_provision": false, 00:11:24.897 "snapshot": false, 00:11:24.897 "clone": false, 00:11:24.897 "esnap_clone": false 00:11:24.897 } 00:11:24.897 } 00:11:24.897 } 00:11:24.897 ] 00:11:24.897 13:40:27 -- common/autotest_common.sh@893 -- # return 0 00:11:24.897 13:40:27 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:24.897 13:40:27 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:25.154 13:40:27 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:25.154 13:40:27 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:25.154 13:40:27 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:25.411 13:40:28 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:25.411 13:40:28 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a3cf14c-694b-47b9-8867-f1bd8fadca80 00:11:25.668 13:40:28 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3720157d-ad56-4719-8b68-0d4cba961d03 00:11:26.233 13:40:28 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.491 00:11:26.491 real 0m18.701s 00:11:26.491 user 0m19.095s 00:11:26.491 sys 0m1.464s 00:11:26.491 13:40:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:26.491 13:40:29 -- common/autotest_common.sh@10 -- # set +x 00:11:26.491 ************************************ 00:11:26.491 END TEST lvs_grow_clean 00:11:26.491 ************************************ 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:26.491 13:40:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:26.491 13:40:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.491 13:40:29 -- common/autotest_common.sh@10 -- # set +x 00:11:26.491 ************************************ 00:11:26.491 START TEST lvs_grow_dirty 00:11:26.491 ************************************ 00:11:26.491 13:40:29 -- 
common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:26.491 13:40:29 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.748 13:40:29 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.748 13:40:29 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:27.005 13:40:29 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:27.005 13:40:29 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:27.262 13:40:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=536d76d6-e877-4db0-afa6-61219d472f97 00:11:27.262 13:40:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:27.262 13:40:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:27.826 13:40:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:27.826 13:40:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:27.826 13:40:30 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 536d76d6-e877-4db0-afa6-61219d472f97 lvol 150 00:11:28.084 13:40:30 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e0bb295-2ace-486e-b0c1-0eaf27a40512 00:11:28.084 13:40:30 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:28.084 13:40:30 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:28.341 [2024-04-18 13:40:30.940017] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:28.341 [2024-04-18 13:40:30.940111] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:28.341 true 00:11:28.341 13:40:30 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:28.341 13:40:30 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:28.599 13:40:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:28.599 13:40:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:28.857 13:40:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
7e0bb295-2ace-486e-b0c1-0eaf27a40512 00:11:29.115 13:40:31 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:29.372 13:40:32 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:29.630 13:40:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1101855 00:11:29.630 13:40:32 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:29.630 13:40:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.630 13:40:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1101855 /var/tmp/bdevperf.sock 00:11:29.630 13:40:32 -- common/autotest_common.sh@817 -- # '[' -z 1101855 ']' 00:11:29.630 13:40:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:29.630 13:40:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:29.630 13:40:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:29.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:29.630 13:40:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:29.630 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:11:29.630 [2024-04-18 13:40:32.425518] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:29.630 [2024-04-18 13:40:32.425601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101855 ] 00:11:29.887 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.887 [2024-04-18 13:40:32.502661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.887 [2024-04-18 13:40:32.622768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.143 13:40:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:30.143 13:40:32 -- common/autotest_common.sh@850 -- # return 0 00:11:30.143 13:40:32 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:30.401 Nvme0n1 00:11:30.401 13:40:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:30.658 [ 00:11:30.658 { 00:11:30.658 "name": "Nvme0n1", 00:11:30.658 "aliases": [ 00:11:30.658 "7e0bb295-2ace-486e-b0c1-0eaf27a40512" 00:11:30.658 ], 00:11:30.658 "product_name": "NVMe disk", 00:11:30.658 "block_size": 4096, 00:11:30.658 "num_blocks": 38912, 00:11:30.658 "uuid": "7e0bb295-2ace-486e-b0c1-0eaf27a40512", 00:11:30.658 "assigned_rate_limits": { 00:11:30.658 "rw_ios_per_sec": 0, 00:11:30.658 "rw_mbytes_per_sec": 0, 00:11:30.658 "r_mbytes_per_sec": 0, 00:11:30.658 "w_mbytes_per_sec": 0 00:11:30.658 }, 00:11:30.658 "claimed": false, 00:11:30.658 "zoned": false, 00:11:30.658 "supported_io_types": { 00:11:30.658 "read": true, 00:11:30.658 "write": true, 
00:11:30.658 "unmap": true, 00:11:30.658 "write_zeroes": true, 00:11:30.658 "flush": true, 00:11:30.658 "reset": true, 00:11:30.658 "compare": true, 00:11:30.658 "compare_and_write": true, 00:11:30.658 "abort": true, 00:11:30.658 "nvme_admin": true, 00:11:30.658 "nvme_io": true 00:11:30.658 }, 00:11:30.658 "memory_domains": [ 00:11:30.658 { 00:11:30.658 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:11:30.658 "dma_device_type": 0 00:11:30.658 } 00:11:30.658 ], 00:11:30.658 "driver_specific": { 00:11:30.658 "nvme": [ 00:11:30.658 { 00:11:30.658 "trid": { 00:11:30.658 "trtype": "RDMA", 00:11:30.658 "adrfam": "IPv4", 00:11:30.658 "traddr": "192.168.100.8", 00:11:30.658 "trsvcid": "4420", 00:11:30.658 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:30.658 }, 00:11:30.658 "ctrlr_data": { 00:11:30.658 "cntlid": 1, 00:11:30.658 "vendor_id": "0x8086", 00:11:30.658 "model_number": "SPDK bdev Controller", 00:11:30.658 "serial_number": "SPDK0", 00:11:30.658 "firmware_revision": "24.05", 00:11:30.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:30.658 "oacs": { 00:11:30.658 "security": 0, 00:11:30.658 "format": 0, 00:11:30.658 "firmware": 0, 00:11:30.658 "ns_manage": 0 00:11:30.658 }, 00:11:30.658 "multi_ctrlr": true, 00:11:30.658 "ana_reporting": false 00:11:30.658 }, 00:11:30.658 "vs": { 00:11:30.658 "nvme_version": "1.3" 00:11:30.658 }, 00:11:30.658 "ns_data": { 00:11:30.658 "id": 1, 00:11:30.658 "can_share": true 00:11:30.658 } 00:11:30.658 } 00:11:30.658 ], 00:11:30.658 "mp_policy": "active_passive" 00:11:30.658 } 00:11:30.658 } 00:11:30.658 ] 00:11:30.658 13:40:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1101990 00:11:30.658 13:40:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:30.658 13:40:33 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:30.914 Running I/O for 10 seconds... 
00:11:31.846 Latency(us) 00:11:31.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.846 Nvme0n1 : 1.00 21318.00 83.27 0.00 0.00 0.00 0.00 0.00 00:11:31.846 =================================================================================================================== 00:11:31.846 Total : 21318.00 83.27 0.00 0.00 0.00 0.00 0.00 00:11:31.846 00:11:32.781 13:40:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:33.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.039 Nvme0n1 : 2.00 21762.00 85.01 0.00 0.00 0.00 0.00 0.00 00:11:33.039 =================================================================================================================== 00:11:33.039 Total : 21762.00 85.01 0.00 0.00 0.00 0.00 0.00 00:11:33.039 00:11:33.039 true 00:11:33.039 13:40:35 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:33.039 13:40:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:33.296 13:40:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:33.296 13:40:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:33.296 13:40:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 1101990 00:11:33.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.861 Nvme0n1 : 3.00 21993.67 85.91 0.00 0.00 0.00 0.00 0.00 00:11:33.861 =================================================================================================================== 00:11:33.861 Total : 21993.67 85.91 0.00 0.00 0.00 0.00 0.00 00:11:33.861 00:11:35.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.270 Nvme0n1 : 4.00 22121.00 86.41 0.00 0.00 0.00 0.00 0.00 00:11:35.270 =================================================================================================================== 00:11:35.270 Total : 22121.00 86.41 0.00 0.00 0.00 0.00 0.00 00:11:35.270 00:11:36.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.204 Nvme0n1 : 5.00 22238.80 86.87 0.00 0.00 0.00 0.00 0.00 00:11:36.204 =================================================================================================================== 00:11:36.204 Total : 22238.80 86.87 0.00 0.00 0.00 0.00 0.00 00:11:36.204 00:11:37.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.138 Nvme0n1 : 6.00 22319.50 87.19 0.00 0.00 0.00 0.00 0.00 00:11:37.138 =================================================================================================================== 00:11:37.138 Total : 22319.50 87.19 0.00 0.00 0.00 0.00 0.00 00:11:37.138 00:11:38.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.072 Nvme0n1 : 7.00 22390.29 87.46 0.00 0.00 0.00 0.00 0.00 00:11:38.072 =================================================================================================================== 00:11:38.072 Total : 22390.29 87.46 0.00 0.00 0.00 0.00 0.00 00:11:38.072 00:11:39.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.006 Nvme0n1 : 8.00 22447.50 87.69 0.00 0.00 0.00 0.00 0.00 00:11:39.006 
=================================================================================================================== 00:11:39.006 Total : 22447.50 87.69 0.00 0.00 0.00 0.00 0.00 00:11:39.006 00:11:39.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.939 Nvme0n1 : 9.00 22486.11 87.84 0.00 0.00 0.00 0.00 0.00 00:11:39.939 =================================================================================================================== 00:11:39.939 Total : 22486.11 87.84 0.00 0.00 0.00 0.00 0.00 00:11:39.939 00:11:40.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.882 Nvme0n1 : 10.00 22512.60 87.94 0.00 0.00 0.00 0.00 0.00 00:11:40.882 =================================================================================================================== 00:11:40.882 Total : 22512.60 87.94 0.00 0.00 0.00 0.00 0.00 00:11:40.882 00:11:40.882 00:11:40.882 Latency(us) 00:11:40.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.882 Nvme0n1 : 10.00 22513.51 87.94 0.00 0.00 5680.68 4344.79 18252.99 00:11:40.882 =================================================================================================================== 00:11:40.882 Total : 22513.51 87.94 0.00 0.00 5680.68 4344.79 18252.99 00:11:40.882 0 00:11:41.140 13:40:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1101855 00:11:41.140 13:40:43 -- common/autotest_common.sh@936 -- # '[' -z 1101855 ']' 00:11:41.140 13:40:43 -- common/autotest_common.sh@940 -- # kill -0 1101855 00:11:41.140 13:40:43 -- common/autotest_common.sh@941 -- # uname 00:11:41.140 13:40:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.140 13:40:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1101855 00:11:41.140 13:40:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:41.140 13:40:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:41.140 13:40:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1101855' 00:11:41.140 killing process with pid 1101855 00:11:41.140 13:40:43 -- common/autotest_common.sh@955 -- # kill 1101855 00:11:41.140 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.140 00:11:41.140 Latency(us) 00:11:41.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.140 =================================================================================================================== 00:11:41.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:41.140 13:40:43 -- common/autotest_common.sh@960 -- # wait 1101855 00:11:41.397 13:40:44 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:41.654 13:40:44 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:41.654 13:40:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:41.912 13:40:44 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:41.912 13:40:44 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:11:41.912 13:40:44 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1099093 00:11:41.912 13:40:44 -- target/nvmf_lvs_grow.sh@74 -- # wait 1099093 00:11:41.912 
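For the dirty variant of the test, the target is killed without a clean shutdown so the lvstore metadata is left dirty, then a fresh target is started and the AIO bdev re-attached (the blobstore recovery messages appear a little further down). An approximate shape of that step, paraphrased from the trace (nvmf_lvs_grow.sh, roughly lines 69-76); variable names are simplified placeholders:

  free_clusters=$("$rpc_py" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
  [[ $lvs_variant == dirty ]]            # this branch only runs for the "dirty" flavour of lvs_grow
  kill -9 "$nvmf_tgt_pid"                # SIGKILL the target: no chance to flush lvstore metadata
  wait "$nvmf_tgt_pid" || true           # reap it; the shell prints the "Killed" job notice seen here
  nvmfappstart -m 0x1                    # bring up a fresh nvmf_tgt (pid 1103203 in this run)
  "$rpc_py" bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096   # re-attach the file; blobstore recovery runs on load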
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1099093 Killed "${NVMF_APP[@]}" "$@" 00:11:41.912 13:40:44 -- target/nvmf_lvs_grow.sh@74 -- # true 00:11:41.912 13:40:44 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:11:41.912 13:40:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:41.912 13:40:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:41.912 13:40:44 -- common/autotest_common.sh@10 -- # set +x 00:11:41.912 13:40:44 -- nvmf/common.sh@470 -- # nvmfpid=1103203 00:11:41.912 13:40:44 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:41.912 13:40:44 -- nvmf/common.sh@471 -- # waitforlisten 1103203 00:11:41.912 13:40:44 -- common/autotest_common.sh@817 -- # '[' -z 1103203 ']' 00:11:41.912 13:40:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.912 13:40:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:41.912 13:40:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.912 13:40:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:41.912 13:40:44 -- common/autotest_common.sh@10 -- # set +x 00:11:42.170 [2024-04-18 13:40:44.724653] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:42.170 [2024-04-18 13:40:44.724743] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.170 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.170 [2024-04-18 13:40:44.800189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.170 [2024-04-18 13:40:44.919283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.170 [2024-04-18 13:40:44.919351] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.170 [2024-04-18 13:40:44.919367] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.170 [2024-04-18 13:40:44.919381] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.170 [2024-04-18 13:40:44.919392] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.170 [2024-04-18 13:40:44.919424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.102 13:40:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:43.102 13:40:45 -- common/autotest_common.sh@850 -- # return 0 00:11:43.102 13:40:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:43.102 13:40:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:43.102 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:11:43.102 13:40:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.102 13:40:45 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:43.360 [2024-04-18 13:40:46.003437] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:43.361 [2024-04-18 13:40:46.003591] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:43.361 [2024-04-18 13:40:46.003648] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:43.361 13:40:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:11:43.361 13:40:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 7e0bb295-2ace-486e-b0c1-0eaf27a40512 00:11:43.361 13:40:46 -- common/autotest_common.sh@885 -- # local bdev_name=7e0bb295-2ace-486e-b0c1-0eaf27a40512 00:11:43.361 13:40:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:43.361 13:40:46 -- common/autotest_common.sh@887 -- # local i 00:11:43.361 13:40:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:43.361 13:40:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:43.361 13:40:46 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:43.618 13:40:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e0bb295-2ace-486e-b0c1-0eaf27a40512 -t 2000 00:11:43.876 [ 00:11:43.876 { 00:11:43.876 "name": "7e0bb295-2ace-486e-b0c1-0eaf27a40512", 00:11:43.876 "aliases": [ 00:11:43.876 "lvs/lvol" 00:11:43.876 ], 00:11:43.876 "product_name": "Logical Volume", 00:11:43.876 "block_size": 4096, 00:11:43.876 "num_blocks": 38912, 00:11:43.876 "uuid": "7e0bb295-2ace-486e-b0c1-0eaf27a40512", 00:11:43.876 "assigned_rate_limits": { 00:11:43.876 "rw_ios_per_sec": 0, 00:11:43.876 "rw_mbytes_per_sec": 0, 00:11:43.876 "r_mbytes_per_sec": 0, 00:11:43.876 "w_mbytes_per_sec": 0 00:11:43.876 }, 00:11:43.876 "claimed": false, 00:11:43.876 "zoned": false, 00:11:43.876 "supported_io_types": { 00:11:43.876 "read": true, 00:11:43.876 "write": true, 00:11:43.876 "unmap": true, 00:11:43.876 "write_zeroes": true, 00:11:43.876 "flush": false, 00:11:43.876 "reset": true, 00:11:43.876 "compare": false, 00:11:43.876 "compare_and_write": false, 00:11:43.876 "abort": false, 00:11:43.876 "nvme_admin": false, 00:11:43.876 "nvme_io": false 00:11:43.876 }, 00:11:43.876 "driver_specific": { 00:11:43.876 "lvol": { 00:11:43.876 "lvol_store_uuid": "536d76d6-e877-4db0-afa6-61219d472f97", 00:11:43.876 "base_bdev": "aio_bdev", 00:11:43.876 "thin_provision": false, 00:11:43.876 "snapshot": false, 00:11:43.876 "clone": false, 00:11:43.876 "esnap_clone": false 00:11:43.876 } 00:11:43.876 } 00:11:43.876 } 00:11:43.876 ] 00:11:43.876 13:40:46 -- common/autotest_common.sh@893 -- # return 0 00:11:43.876 13:40:46 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:43.876 13:40:46 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:11:44.133 13:40:46 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:11:44.133 13:40:46 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:44.133 13:40:46 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:11:44.391 13:40:47 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:11:44.391 13:40:47 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:44.648 [2024-04-18 13:40:47.428694] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:44.906 13:40:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:44.906 13:40:47 -- common/autotest_common.sh@638 -- # local es=0 00:11:44.906 13:40:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:44.906 13:40:47 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:44.906 13:40:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.906 13:40:47 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:44.906 13:40:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.906 13:40:47 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:44.906 13:40:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.906 13:40:47 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:44.906 13:40:47 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:44.906 13:40:47 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:45.164 request: 00:11:45.164 { 00:11:45.164 "uuid": "536d76d6-e877-4db0-afa6-61219d472f97", 00:11:45.164 "method": "bdev_lvol_get_lvstores", 00:11:45.164 "req_id": 1 00:11:45.164 } 00:11:45.164 Got JSON-RPC error response 00:11:45.164 response: 00:11:45.164 { 00:11:45.164 "code": -19, 00:11:45.164 "message": "No such device" 00:11:45.164 } 00:11:45.164 13:40:47 -- common/autotest_common.sh@641 -- # es=1 00:11:45.164 13:40:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:45.164 13:40:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:45.164 13:40:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:45.164 13:40:47 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:45.728 aio_bdev 00:11:45.728 13:40:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7e0bb295-2ace-486e-b0c1-0eaf27a40512 00:11:45.728 13:40:48 -- common/autotest_common.sh@885 -- # local 
bdev_name=7e0bb295-2ace-486e-b0c1-0eaf27a40512 00:11:45.728 13:40:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:45.728 13:40:48 -- common/autotest_common.sh@887 -- # local i 00:11:45.728 13:40:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:45.728 13:40:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:45.728 13:40:48 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:45.984 13:40:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e0bb295-2ace-486e-b0c1-0eaf27a40512 -t 2000 00:11:46.242 [ 00:11:46.242 { 00:11:46.242 "name": "7e0bb295-2ace-486e-b0c1-0eaf27a40512", 00:11:46.242 "aliases": [ 00:11:46.242 "lvs/lvol" 00:11:46.242 ], 00:11:46.242 "product_name": "Logical Volume", 00:11:46.242 "block_size": 4096, 00:11:46.242 "num_blocks": 38912, 00:11:46.242 "uuid": "7e0bb295-2ace-486e-b0c1-0eaf27a40512", 00:11:46.242 "assigned_rate_limits": { 00:11:46.242 "rw_ios_per_sec": 0, 00:11:46.242 "rw_mbytes_per_sec": 0, 00:11:46.242 "r_mbytes_per_sec": 0, 00:11:46.242 "w_mbytes_per_sec": 0 00:11:46.242 }, 00:11:46.242 "claimed": false, 00:11:46.242 "zoned": false, 00:11:46.242 "supported_io_types": { 00:11:46.242 "read": true, 00:11:46.242 "write": true, 00:11:46.242 "unmap": true, 00:11:46.242 "write_zeroes": true, 00:11:46.242 "flush": false, 00:11:46.242 "reset": true, 00:11:46.242 "compare": false, 00:11:46.242 "compare_and_write": false, 00:11:46.242 "abort": false, 00:11:46.242 "nvme_admin": false, 00:11:46.242 "nvme_io": false 00:11:46.242 }, 00:11:46.242 "driver_specific": { 00:11:46.242 "lvol": { 00:11:46.242 "lvol_store_uuid": "536d76d6-e877-4db0-afa6-61219d472f97", 00:11:46.242 "base_bdev": "aio_bdev", 00:11:46.242 "thin_provision": false, 00:11:46.242 "snapshot": false, 00:11:46.242 "clone": false, 00:11:46.242 "esnap_clone": false 00:11:46.242 } 00:11:46.242 } 00:11:46.242 } 00:11:46.242 ] 00:11:46.500 13:40:49 -- common/autotest_common.sh@893 -- # return 0 00:11:46.500 13:40:49 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:46.500 13:40:49 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:46.757 13:40:49 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:46.757 13:40:49 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:46.757 13:40:49 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:47.015 13:40:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:47.015 13:40:49 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e0bb295-2ace-486e-b0c1-0eaf27a40512 00:11:47.273 13:40:49 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 536d76d6-e877-4db0-afa6-61219d472f97 00:11:47.531 13:40:50 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:47.789 13:40:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:47.790 00:11:47.790 real 0m21.266s 00:11:47.790 user 0m53.035s 00:11:47.790 sys 0m4.277s 00:11:47.790 13:40:50 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:11:47.790 13:40:50 -- common/autotest_common.sh@10 -- # set +x 00:11:47.790 ************************************ 00:11:47.790 END TEST lvs_grow_dirty 00:11:47.790 ************************************ 00:11:47.790 13:40:50 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:47.790 13:40:50 -- common/autotest_common.sh@794 -- # type=--id 00:11:47.790 13:40:50 -- common/autotest_common.sh@795 -- # id=0 00:11:47.790 13:40:50 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:47.790 13:40:50 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:48.062 13:40:50 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:48.062 13:40:50 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:48.062 13:40:50 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:48.062 13:40:50 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:48.062 nvmf_trace.0 00:11:48.062 13:40:50 -- common/autotest_common.sh@809 -- # return 0 00:11:48.062 13:40:50 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:48.062 13:40:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:48.062 13:40:50 -- nvmf/common.sh@117 -- # sync 00:11:48.062 13:40:50 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:48.062 13:40:50 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:48.062 13:40:50 -- nvmf/common.sh@120 -- # set +e 00:11:48.062 13:40:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.062 13:40:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:48.062 rmmod nvme_rdma 00:11:48.062 rmmod nvme_fabrics 00:11:48.062 13:40:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.062 13:40:50 -- nvmf/common.sh@124 -- # set -e 00:11:48.062 13:40:50 -- nvmf/common.sh@125 -- # return 0 00:11:48.062 13:40:50 -- nvmf/common.sh@478 -- # '[' -n 1103203 ']' 00:11:48.062 13:40:50 -- nvmf/common.sh@479 -- # killprocess 1103203 00:11:48.062 13:40:50 -- common/autotest_common.sh@936 -- # '[' -z 1103203 ']' 00:11:48.062 13:40:50 -- common/autotest_common.sh@940 -- # kill -0 1103203 00:11:48.062 13:40:50 -- common/autotest_common.sh@941 -- # uname 00:11:48.062 13:40:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.062 13:40:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1103203 00:11:48.062 13:40:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:48.062 13:40:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:48.062 13:40:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1103203' 00:11:48.062 killing process with pid 1103203 00:11:48.062 13:40:50 -- common/autotest_common.sh@955 -- # kill 1103203 00:11:48.062 13:40:50 -- common/autotest_common.sh@960 -- # wait 1103203 00:11:48.322 13:40:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:48.322 13:40:50 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:48.322 00:11:48.322 real 0m44.537s 00:11:48.322 user 1m19.968s 00:11:48.322 sys 0m8.331s 00:11:48.322 13:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.322 13:40:50 -- common/autotest_common.sh@10 -- # set +x 00:11:48.322 ************************************ 00:11:48.322 END TEST nvmf_lvs_grow 00:11:48.322 ************************************ 00:11:48.322 13:40:51 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:48.322 13:40:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:48.322 13:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.322 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:11:48.580 ************************************ 00:11:48.580 START TEST nvmf_bdev_io_wait 00:11:48.580 ************************************ 00:11:48.580 13:40:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:48.580 * Looking for test storage... 00:11:48.580 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:48.580 13:40:51 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.580 13:40:51 -- nvmf/common.sh@7 -- # uname -s 00:11:48.580 13:40:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.580 13:40:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.580 13:40:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.580 13:40:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.580 13:40:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.580 13:40:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.580 13:40:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.580 13:40:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.580 13:40:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.580 13:40:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.580 13:40:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:11:48.580 13:40:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:11:48.580 13:40:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.580 13:40:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.580 13:40:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.580 13:40:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.580 13:40:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:48.580 13:40:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.580 13:40:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.580 13:40:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.580 13:40:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.580 13:40:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.580 13:40:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.580 13:40:51 -- paths/export.sh@5 -- # export PATH 00:11:48.580 13:40:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.580 13:40:51 -- nvmf/common.sh@47 -- # : 0 00:11:48.580 13:40:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.580 13:40:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.580 13:40:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.580 13:40:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.580 13:40:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.580 13:40:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.581 13:40:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.581 13:40:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.581 13:40:51 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.581 13:40:51 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.581 13:40:51 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:48.581 13:40:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:48.581 13:40:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.581 13:40:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:48.581 13:40:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:48.581 13:40:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:48.581 13:40:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.581 13:40:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.581 13:40:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.581 13:40:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:48.581 13:40:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:48.581 13:40:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.581 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:11:51.861 13:40:53 -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:11:51.861 13:40:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.861 13:40:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.861 13:40:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.861 13:40:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.861 13:40:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.861 13:40:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.861 13:40:53 -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.861 13:40:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.861 13:40:53 -- nvmf/common.sh@296 -- # e810=() 00:11:51.861 13:40:53 -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.861 13:40:53 -- nvmf/common.sh@297 -- # x722=() 00:11:51.861 13:40:53 -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.861 13:40:53 -- nvmf/common.sh@298 -- # mlx=() 00:11:51.861 13:40:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.861 13:40:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.861 13:40:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.861 13:40:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:51.861 13:40:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:51.861 13:40:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:51.861 13:40:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.861 13:40:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.861 13:40:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:11:51.861 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:11:51.861 13:40:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.861 13:40:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.861 13:40:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:11:51.861 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:11:51.861 13:40:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:11:51.861 13:40:53 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.861 13:40:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.861 13:40:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.861 13:40:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.861 13:40:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:51.861 13:40:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.861 13:40:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:11:51.861 Found net devices under 0000:81:00.0: mlx_0_0 00:11:51.861 13:40:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.861 13:40:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.861 13:40:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.861 13:40:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:51.861 13:40:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.861 13:40:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:11:51.861 Found net devices under 0000:81:00.1: mlx_0_1 00:11:51.861 13:40:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.861 13:40:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:51.861 13:40:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:51.861 13:40:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:51.861 13:40:53 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:51.862 13:40:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:51.862 13:40:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:51.862 13:40:53 -- nvmf/common.sh@58 -- # uname 00:11:51.862 13:40:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:51.862 13:40:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:51.862 13:40:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:51.862 13:40:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:51.862 13:40:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:51.862 13:40:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:51.862 13:40:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:51.862 13:40:53 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:51.862 13:40:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:51.862 13:40:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:51.862 13:40:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:51.862 13:40:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.862 13:40:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:51.862 13:40:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:51.862 13:40:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.862 13:40:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:51.862 13:40:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.862 13:40:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.862 13:40:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.862 13:40:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:51.862 13:40:53 -- nvmf/common.sh@105 -- # continue 2 00:11:51.862 13:40:53 
-- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.862 13:40:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.862 13:40:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.862 13:40:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.862 13:40:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.862 13:40:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:51.862 13:40:53 -- nvmf/common.sh@105 -- # continue 2 00:11:51.862 13:40:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:51.862 13:40:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:51.862 13:40:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:51.862 13:40:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:51.862 13:40:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.862 13:40:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.862 13:40:54 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:51.862 13:40:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:51.862 13:40:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:51.862 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.862 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:11:51.862 altname enp129s0f0np0 00:11:51.862 inet 192.168.100.8/24 scope global mlx_0_0 00:11:51.862 valid_lft forever preferred_lft forever 00:11:51.862 13:40:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:51.862 13:40:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:51.862 13:40:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.862 13:40:54 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:51.862 13:40:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:51.862 13:40:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:51.862 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.862 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:11:51.862 altname enp129s0f1np1 00:11:51.862 inet 192.168.100.9/24 scope global mlx_0_1 00:11:51.862 valid_lft forever preferred_lft forever 00:11:51.862 13:40:54 -- nvmf/common.sh@411 -- # return 0 00:11:51.862 13:40:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:51.862 13:40:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:51.862 13:40:54 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:51.862 13:40:54 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:51.862 13:40:54 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:51.862 13:40:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.862 13:40:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:51.862 13:40:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:51.862 13:40:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.862 13:40:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:51.862 13:40:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.862 13:40:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.862 13:40:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.862 13:40:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:51.862 13:40:54 -- nvmf/common.sh@105 -- # continue 2 
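allocate_nic_ips has just put 192.168.100.8 and 192.168.100.9 on the two mlx_0_* netdevs; the read-back helper used above and below is, in rough outline (nvmf/common.sh, details approximated):

  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # -> 192.168.100.8
  get_ip_address mlx_0_1    # -> 192.168.100.9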
00:11:51.862 13:40:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.862 13:40:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.862 13:40:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.862 13:40:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.862 13:40:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.862 13:40:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:51.862 13:40:54 -- nvmf/common.sh@105 -- # continue 2 00:11:51.862 13:40:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:51.862 13:40:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:51.862 13:40:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.862 13:40:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:51.862 13:40:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:51.862 13:40:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.862 13:40:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.862 13:40:54 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:51.862 192.168.100.9' 00:11:51.862 13:40:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:51.862 192.168.100.9' 00:11:51.862 13:40:54 -- nvmf/common.sh@446 -- # head -n 1 00:11:51.862 13:40:54 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:51.862 13:40:54 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:51.862 192.168.100.9' 00:11:51.862 13:40:54 -- nvmf/common.sh@447 -- # tail -n +2 00:11:51.862 13:40:54 -- nvmf/common.sh@447 -- # head -n 1 00:11:51.862 13:40:54 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:51.862 13:40:54 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:51.862 13:40:54 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:51.862 13:40:54 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:51.862 13:40:54 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:51.862 13:40:54 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:51.862 13:40:54 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:51.862 13:40:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:51.862 13:40:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:51.862 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:51.862 13:40:54 -- nvmf/common.sh@470 -- # nvmfpid=1106127 00:11:51.862 13:40:54 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:51.862 13:40:54 -- nvmf/common.sh@471 -- # waitforlisten 1106127 00:11:51.862 13:40:54 -- common/autotest_common.sh@817 -- # '[' -z 1106127 ']' 00:11:51.862 13:40:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.862 13:40:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:51.862 13:40:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
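nvmfappstart, traced above, launches the target with RPC-gated init and blocks until its UNIX-domain RPC socket answers. An approximate shape of the helper (nvmf/common.sh); only the arguments shown in the trace are literal, the PID capture is assumed:

  nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  "$nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!                                   # 1106127 in this run
  waitforlisten "$nvmfpid"                     # polls /var/tmp/spdk.sock until the target responds
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT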
00:11:51.862 13:40:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:51.862 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:51.862 [2024-04-18 13:40:54.122662] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:51.862 [2024-04-18 13:40:54.122754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.862 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.862 [2024-04-18 13:40:54.202270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.862 [2024-04-18 13:40:54.328710] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.862 [2024-04-18 13:40:54.328775] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.862 [2024-04-18 13:40:54.328798] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.862 [2024-04-18 13:40:54.328812] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.862 [2024-04-18 13:40:54.328824] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.862 [2024-04-18 13:40:54.328909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.862 [2024-04-18 13:40:54.328969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.862 [2024-04-18 13:40:54.329031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.862 [2024-04-18 13:40:54.329034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.862 13:40:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:51.862 13:40:54 -- common/autotest_common.sh@850 -- # return 0 00:11:51.862 13:40:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:51.862 13:40:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:51.862 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:51.862 13:40:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.862 13:40:54 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:51.862 13:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.862 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:51.862 13:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.862 13:40:54 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:51.862 13:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.862 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:51.862 13:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.862 13:40:54 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:51.863 13:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.863 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:51.863 [2024-04-18 13:40:54.508099] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa63140/0xa67630) succeed. 00:11:51.863 [2024-04-18 13:40:54.520094] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa64730/0xaa8cc0) succeed. 
00:11:52.121 13:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.121 13:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.121 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:52.121 Malloc0 00:11:52.121 13:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.121 13:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.121 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:52.121 13:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.121 13:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.121 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:52.121 13:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:52.121 13:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.121 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:11:52.121 [2024-04-18 13:40:54.748541] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:52.121 13:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1106153 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@30 -- # READ_PID=1106155 00:11:52.121 13:40:54 -- nvmf/common.sh@521 -- # config=() 00:11:52.121 13:40:54 -- nvmf/common.sh@521 -- # local subsystem config 00:11:52.121 13:40:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:52.121 13:40:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:52.121 { 00:11:52.121 "params": { 00:11:52.121 "name": "Nvme$subsystem", 00:11:52.121 "trtype": "$TEST_TRANSPORT", 00:11:52.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.121 "adrfam": "ipv4", 00:11:52.121 "trsvcid": "$NVMF_PORT", 00:11:52.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.121 "hdgst": ${hdgst:-false}, 00:11:52.121 "ddgst": ${ddgst:-false} 00:11:52.121 }, 00:11:52.121 "method": "bdev_nvme_attach_controller" 00:11:52.121 } 00:11:52.121 EOF 00:11:52.121 )") 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1106157 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:52.121 13:40:54 -- nvmf/common.sh@521 -- # config=() 00:11:52.121 13:40:54 -- nvmf/common.sh@521 -- # local subsystem config 00:11:52.121 13:40:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:52.121 13:40:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:52.121 { 00:11:52.121 "params": { 00:11:52.121 "name": 
"Nvme$subsystem", 00:11:52.121 "trtype": "$TEST_TRANSPORT", 00:11:52.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.121 "adrfam": "ipv4", 00:11:52.121 "trsvcid": "$NVMF_PORT", 00:11:52.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.121 "hdgst": ${hdgst:-false}, 00:11:52.121 "ddgst": ${ddgst:-false} 00:11:52.121 }, 00:11:52.121 "method": "bdev_nvme_attach_controller" 00:11:52.121 } 00:11:52.121 EOF 00:11:52.121 )") 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1106160 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:52.121 13:40:54 -- target/bdev_io_wait.sh@35 -- # sync 00:11:52.121 13:40:54 -- nvmf/common.sh@543 -- # cat 00:11:52.121 13:40:54 -- nvmf/common.sh@521 -- # config=() 00:11:52.121 13:40:54 -- nvmf/common.sh@521 -- # local subsystem config 00:11:52.121 13:40:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:52.121 13:40:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:52.121 { 00:11:52.121 "params": { 00:11:52.121 "name": "Nvme$subsystem", 00:11:52.121 "trtype": "$TEST_TRANSPORT", 00:11:52.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.121 "adrfam": "ipv4", 00:11:52.121 "trsvcid": "$NVMF_PORT", 00:11:52.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.121 "hdgst": ${hdgst:-false}, 00:11:52.121 "ddgst": ${ddgst:-false} 00:11:52.121 }, 00:11:52.121 "method": "bdev_nvme_attach_controller" 00:11:52.121 } 00:11:52.121 EOF 00:11:52.121 )") 00:11:52.122 13:40:54 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:52.122 13:40:54 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:52.122 13:40:54 -- nvmf/common.sh@521 -- # config=() 00:11:52.122 13:40:54 -- nvmf/common.sh@543 -- # cat 00:11:52.122 13:40:54 -- nvmf/common.sh@521 -- # local subsystem config 00:11:52.122 13:40:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:52.122 13:40:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:52.122 { 00:11:52.122 "params": { 00:11:52.122 "name": "Nvme$subsystem", 00:11:52.122 "trtype": "$TEST_TRANSPORT", 00:11:52.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.122 "adrfam": "ipv4", 00:11:52.122 "trsvcid": "$NVMF_PORT", 00:11:52.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.122 "hdgst": ${hdgst:-false}, 00:11:52.122 "ddgst": ${ddgst:-false} 00:11:52.122 }, 00:11:52.122 "method": "bdev_nvme_attach_controller" 00:11:52.122 } 00:11:52.122 EOF 00:11:52.122 )") 00:11:52.122 13:40:54 -- nvmf/common.sh@543 -- # cat 00:11:52.122 13:40:54 -- target/bdev_io_wait.sh@37 -- # wait 1106153 00:11:52.122 13:40:54 -- nvmf/common.sh@543 -- # cat 00:11:52.122 13:40:54 -- nvmf/common.sh@545 -- # jq . 00:11:52.122 13:40:54 -- nvmf/common.sh@545 -- # jq . 00:11:52.122 13:40:54 -- nvmf/common.sh@545 -- # jq . 
00:11:52.122 13:40:54 -- nvmf/common.sh@546 -- # IFS=, 00:11:52.122 13:40:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:52.122 "params": { 00:11:52.122 "name": "Nvme1", 00:11:52.122 "trtype": "rdma", 00:11:52.122 "traddr": "192.168.100.8", 00:11:52.122 "adrfam": "ipv4", 00:11:52.122 "trsvcid": "4420", 00:11:52.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.122 "hdgst": false, 00:11:52.122 "ddgst": false 00:11:52.122 }, 00:11:52.122 "method": "bdev_nvme_attach_controller" 00:11:52.122 }' 00:11:52.122 13:40:54 -- nvmf/common.sh@545 -- # jq . 00:11:52.122 13:40:54 -- nvmf/common.sh@546 -- # IFS=, 00:11:52.122 13:40:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:52.122 "params": { 00:11:52.122 "name": "Nvme1", 00:11:52.122 "trtype": "rdma", 00:11:52.122 "traddr": "192.168.100.8", 00:11:52.122 "adrfam": "ipv4", 00:11:52.122 "trsvcid": "4420", 00:11:52.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.122 "hdgst": false, 00:11:52.122 "ddgst": false 00:11:52.122 }, 00:11:52.122 "method": "bdev_nvme_attach_controller" 00:11:52.122 }' 00:11:52.122 13:40:54 -- nvmf/common.sh@546 -- # IFS=, 00:11:52.122 13:40:54 -- nvmf/common.sh@546 -- # IFS=, 00:11:52.122 13:40:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:52.122 "params": { 00:11:52.122 "name": "Nvme1", 00:11:52.122 "trtype": "rdma", 00:11:52.122 "traddr": "192.168.100.8", 00:11:52.122 "adrfam": "ipv4", 00:11:52.122 "trsvcid": "4420", 00:11:52.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.122 "hdgst": false, 00:11:52.122 "ddgst": false 00:11:52.122 }, 00:11:52.122 "method": "bdev_nvme_attach_controller" 00:11:52.122 }' 00:11:52.122 13:40:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:52.122 "params": { 00:11:52.122 "name": "Nvme1", 00:11:52.122 "trtype": "rdma", 00:11:52.122 "traddr": "192.168.100.8", 00:11:52.122 "adrfam": "ipv4", 00:11:52.122 "trsvcid": "4420", 00:11:52.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.122 "hdgst": false, 00:11:52.122 "ddgst": false 00:11:52.122 }, 00:11:52.122 "method": "bdev_nvme_attach_controller" 00:11:52.122 }' 00:11:52.122 [2024-04-18 13:40:54.796363] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:52.122 [2024-04-18 13:40:54.796363] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:52.122 [2024-04-18 13:40:54.796363] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:52.122 [2024-04-18 13:40:54.796460] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-18 13:40:54.796461] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-18 13:40:54.796460] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:52.122 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:52.122 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:52.122 [2024-04-18 13:40:54.796782] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:11:52.122 [2024-04-18 13:40:54.796853] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:52.122 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.380 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.380 [2024-04-18 13:40:54.998120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.380 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.380 [2024-04-18 13:40:55.105264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:52.380 [2024-04-18 13:40:55.111816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.638 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.638 [2024-04-18 13:40:55.217039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:52.638 [2024-04-18 13:40:55.227081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.638 [2024-04-18 13:40:55.333774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:11:52.638 [2024-04-18 13:40:55.340687] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.896 [2024-04-18 13:40:55.449890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:52.896 Running I/O for 1 seconds... 00:11:52.896 Running I/O for 1 seconds... 00:11:52.896 Running I/O for 1 seconds... 00:11:52.896 Running I/O for 1 seconds... 00:11:53.830 00:11:53.830 Latency(us) 00:11:53.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.830 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:53.830 Nvme1n1 : 1.01 13744.60 53.69 0.00 0.00 9277.47 6407.96 19806.44 00:11:53.830 =================================================================================================================== 00:11:53.830 Total : 13744.60 53.69 0.00 0.00 9277.47 6407.96 19806.44 00:11:53.830 00:11:53.830 Latency(us) 00:11:53.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.830 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:53.830 Nvme1n1 : 1.01 13230.52 51.68 0.00 0.00 9634.94 6941.96 20388.98 00:11:53.830 =================================================================================================================== 00:11:53.830 Total : 13230.52 51.68 0.00 0.00 9634.94 6941.96 20388.98 00:11:53.830 00:11:53.830 Latency(us) 00:11:53.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.830 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:53.830 Nvme1n1 : 1.00 15079.97 58.91 0.00 0.00 8462.04 4660.34 25243.50 00:11:53.830 =================================================================================================================== 00:11:53.830 Total : 15079.97 58.91 0.00 0.00 8462.04 4660.34 25243.50 00:11:53.830 00:11:53.830 Latency(us) 00:11:53.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.830 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:53.830 Nvme1n1 : 1.00 185745.40 725.57 0.00 0.00 686.17 267.00 2415.12 00:11:53.830 =================================================================================================================== 00:11:53.830 Total : 185745.40 725.57 0.00 0.00 686.17 267.00 2415.12 00:11:54.395 13:40:57 -- target/bdev_io_wait.sh@38 -- # wait 1106155 00:11:54.395 
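After the four one-second runs report their tables above, the script reaps each job by PID and tears the target down. Condensed from the trace here and just below (the grouped wait is a simplification; the script waits on the PIDs one at a time):

  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  trap - SIGINT SIGTERM EXIT
  nvmftestfini                               # unloads nvme-rdma/nvme-fabrics and stops nvmf_tgt, as logged below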
13:40:57 -- target/bdev_io_wait.sh@39 -- # wait 1106157 00:11:54.395 13:40:57 -- target/bdev_io_wait.sh@40 -- # wait 1106160 00:11:54.395 13:40:57 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.395 13:40:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.395 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.395 13:40:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.395 13:40:57 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:54.395 13:40:57 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:54.395 13:40:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:54.395 13:40:57 -- nvmf/common.sh@117 -- # sync 00:11:54.395 13:40:57 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:54.395 13:40:57 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:54.395 13:40:57 -- nvmf/common.sh@120 -- # set +e 00:11:54.395 13:40:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.395 13:40:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:54.395 rmmod nvme_rdma 00:11:54.395 rmmod nvme_fabrics 00:11:54.395 13:40:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.395 13:40:57 -- nvmf/common.sh@124 -- # set -e 00:11:54.395 13:40:57 -- nvmf/common.sh@125 -- # return 0 00:11:54.395 13:40:57 -- nvmf/common.sh@478 -- # '[' -n 1106127 ']' 00:11:54.395 13:40:57 -- nvmf/common.sh@479 -- # killprocess 1106127 00:11:54.395 13:40:57 -- common/autotest_common.sh@936 -- # '[' -z 1106127 ']' 00:11:54.395 13:40:57 -- common/autotest_common.sh@940 -- # kill -0 1106127 00:11:54.395 13:40:57 -- common/autotest_common.sh@941 -- # uname 00:11:54.395 13:40:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:54.395 13:40:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1106127 00:11:54.395 13:40:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.395 13:40:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.395 13:40:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1106127' 00:11:54.395 killing process with pid 1106127 00:11:54.395 13:40:57 -- common/autotest_common.sh@955 -- # kill 1106127 00:11:54.395 13:40:57 -- common/autotest_common.sh@960 -- # wait 1106127 00:11:54.961 13:40:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:54.961 13:40:57 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:54.961 00:11:54.961 real 0m6.415s 00:11:54.961 user 0m19.672s 00:11:54.961 sys 0m3.495s 00:11:54.961 13:40:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.961 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.961 ************************************ 00:11:54.961 END TEST nvmf_bdev_io_wait 00:11:54.961 ************************************ 00:11:54.961 13:40:57 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:54.961 13:40:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:54.961 13:40:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.961 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.961 ************************************ 00:11:54.961 START TEST nvmf_queue_depth 00:11:54.961 ************************************ 00:11:54.961 13:40:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:54.961 * Looking for test storage... 
00:11:54.961 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:54.961 13:40:57 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.961 13:40:57 -- nvmf/common.sh@7 -- # uname -s 00:11:54.961 13:40:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.961 13:40:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.961 13:40:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.961 13:40:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.961 13:40:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.961 13:40:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.961 13:40:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.961 13:40:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.961 13:40:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.961 13:40:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.219 13:40:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:11:55.219 13:40:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:11:55.219 13:40:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.219 13:40:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.219 13:40:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.219 13:40:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.219 13:40:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:55.219 13:40:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.219 13:40:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.219 13:40:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.219 13:40:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.219 13:40:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.219 13:40:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.219 13:40:57 -- paths/export.sh@5 -- # export PATH 00:11:55.219 13:40:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.219 13:40:57 -- nvmf/common.sh@47 -- # : 0 00:11:55.219 13:40:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.219 13:40:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.219 13:40:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.219 13:40:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.219 13:40:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.219 13:40:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.219 13:40:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.219 13:40:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.219 13:40:57 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:55.219 13:40:57 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:55.219 13:40:57 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:55.219 13:40:57 -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:55.219 13:40:57 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:55.219 13:40:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.219 13:40:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:55.219 13:40:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:55.219 13:40:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:55.219 13:40:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.219 13:40:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.219 13:40:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.219 13:40:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:55.219 13:40:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:55.219 13:40:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.219 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:11:57.746 13:41:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:57.746 13:41:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.746 13:41:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.746 13:41:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.746 13:41:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.746 13:41:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.746 13:41:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.746 13:41:00 -- nvmf/common.sh@295 -- # net_devs=() 
00:11:57.746 13:41:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.746 13:41:00 -- nvmf/common.sh@296 -- # e810=() 00:11:57.746 13:41:00 -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.746 13:41:00 -- nvmf/common.sh@297 -- # x722=() 00:11:57.746 13:41:00 -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.746 13:41:00 -- nvmf/common.sh@298 -- # mlx=() 00:11:57.746 13:41:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.746 13:41:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.746 13:41:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.746 13:41:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.746 13:41:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.746 13:41:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.746 13:41:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.746 13:41:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.747 13:41:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.747 13:41:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.747 13:41:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.747 13:41:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.747 13:41:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.747 13:41:00 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:57.747 13:41:00 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:57.747 13:41:00 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:57.747 13:41:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.747 13:41:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:11:57.747 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:11:57.747 13:41:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:57.747 13:41:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:11:57.747 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:11:57.747 13:41:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:57.747 13:41:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.747 13:41:00 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.747 13:41:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:57.747 13:41:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.747 13:41:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:11:57.747 Found net devices under 0000:81:00.0: mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.747 13:41:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.747 13:41:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:57.747 13:41:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.747 13:41:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:11:57.747 Found net devices under 0000:81:00.1: mlx_0_1 00:11:57.747 13:41:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.747 13:41:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:57.747 13:41:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:57.747 13:41:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:57.747 13:41:00 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:57.747 13:41:00 -- nvmf/common.sh@58 -- # uname 00:11:57.747 13:41:00 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:57.747 13:41:00 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:57.747 13:41:00 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:57.747 13:41:00 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:57.747 13:41:00 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:57.747 13:41:00 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:57.747 13:41:00 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:57.747 13:41:00 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:57.747 13:41:00 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:57.747 13:41:00 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:57.747 13:41:00 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:57.747 13:41:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:57.747 13:41:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:57.747 13:41:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:57.747 13:41:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:57.747 13:41:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:57.747 13:41:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@105 -- # continue 2 00:11:57.747 13:41:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:57.747 13:41:00 -- 
nvmf/common.sh@105 -- # continue 2 00:11:57.747 13:41:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:57.747 13:41:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.747 13:41:00 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:57.747 13:41:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:57.747 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:57.747 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:11:57.747 altname enp129s0f0np0 00:11:57.747 inet 192.168.100.8/24 scope global mlx_0_0 00:11:57.747 valid_lft forever preferred_lft forever 00:11:57.747 13:41:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:57.747 13:41:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:57.747 13:41:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.747 13:41:00 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:57.747 13:41:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:57.747 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:57.747 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:11:57.747 altname enp129s0f1np1 00:11:57.747 inet 192.168.100.9/24 scope global mlx_0_1 00:11:57.747 valid_lft forever preferred_lft forever 00:11:57.747 13:41:00 -- nvmf/common.sh@411 -- # return 0 00:11:57.747 13:41:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:57.747 13:41:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:57.747 13:41:00 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:57.747 13:41:00 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:57.747 13:41:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:57.747 13:41:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:57.747 13:41:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:57.747 13:41:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:57.747 13:41:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:57.747 13:41:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@105 -- # continue 2 00:11:57.747 13:41:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.747 13:41:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:57.747 13:41:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:11:57.747 13:41:00 -- nvmf/common.sh@105 -- # continue 2 00:11:57.747 13:41:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:57.747 13:41:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.747 13:41:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:57.747 13:41:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:57.747 13:41:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.747 13:41:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.747 13:41:00 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:57.747 192.168.100.9' 00:11:57.747 13:41:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:57.747 192.168.100.9' 00:11:57.747 13:41:00 -- nvmf/common.sh@446 -- # head -n 1 00:11:57.747 13:41:00 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:57.747 13:41:00 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:57.747 192.168.100.9' 00:11:57.747 13:41:00 -- nvmf/common.sh@447 -- # tail -n +2 00:11:57.747 13:41:00 -- nvmf/common.sh@447 -- # head -n 1 00:11:57.747 13:41:00 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:57.747 13:41:00 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:57.748 13:41:00 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:57.748 13:41:00 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:57.748 13:41:00 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:57.748 13:41:00 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:57.748 13:41:00 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:57.748 13:41:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:57.748 13:41:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:57.748 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:57.748 13:41:00 -- nvmf/common.sh@470 -- # nvmfpid=1108523 00:11:57.748 13:41:00 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:57.748 13:41:00 -- nvmf/common.sh@471 -- # waitforlisten 1108523 00:11:57.748 13:41:00 -- common/autotest_common.sh@817 -- # '[' -z 1108523 ']' 00:11:57.748 13:41:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.748 13:41:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:57.748 13:41:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.748 13:41:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:57.748 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:57.748 [2024-04-18 13:41:00.416715] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:11:57.748 [2024-04-18 13:41:00.416816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.748 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.748 [2024-04-18 13:41:00.508774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.006 [2024-04-18 13:41:00.644842] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.006 [2024-04-18 13:41:00.644900] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.006 [2024-04-18 13:41:00.644916] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.006 [2024-04-18 13:41:00.644929] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.006 [2024-04-18 13:41:00.644950] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.006 [2024-04-18 13:41:00.644994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.006 13:41:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.006 13:41:00 -- common/autotest_common.sh@850 -- # return 0 00:11:58.006 13:41:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:58.006 13:41:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:58.006 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:58.006 13:41:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.006 13:41:00 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:58.006 13:41:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.006 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:58.263 [2024-04-18 13:41:00.827243] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2305220/0x2309710) succeed. 00:11:58.263 [2024-04-18 13:41:00.839360] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2306720/0x234ada0) succeed. 
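For reference, a minimal hand-run equivalent of the nvmfappstart and nvmf_create_transport steps traced above might look like the sketch below. The binary path, core mask and transport options are copied from the trace; the readiness poll via rpc_get_methods is a simplification of what waitforlisten actually does and the loop itself is an assumption.

# Rough manual equivalent of starting the target and creating the RDMA transport.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
NVMF_PID=$!
# wait until the target answers on its default RPC socket (/var/tmp/spdk.sock)
until $SPDK/scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
# same buffer settings queue_depth.sh uses for the rdma transport
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192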
00:11:58.263 13:41:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.263 13:41:00 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.263 13:41:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.263 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:58.263 Malloc0 00:11:58.263 13:41:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.263 13:41:00 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.263 13:41:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.263 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:58.263 13:41:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.263 13:41:00 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.263 13:41:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.263 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:58.263 13:41:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.263 13:41:00 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:58.263 13:41:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.263 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:58.263 [2024-04-18 13:41:00.939423] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:58.263 13:41:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.263 13:41:00 -- target/queue_depth.sh@30 -- # bdevperf_pid=1108666 00:11:58.263 13:41:00 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:58.263 13:41:00 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:58.263 13:41:00 -- target/queue_depth.sh@33 -- # waitforlisten 1108666 /var/tmp/bdevperf.sock 00:11:58.263 13:41:00 -- common/autotest_common.sh@817 -- # '[' -z 1108666 ']' 00:11:58.263 13:41:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:58.263 13:41:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:58.263 13:41:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:58.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:58.263 13:41:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:58.263 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:11:58.263 [2024-04-18 13:41:00.994686] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
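The subsystem setup and the bdevperf run that follows can be condensed into a short, hand-runnable sequence; every command and flag below is lifted from the trace in this section, while the sleep standing in for waitforlisten on the bdevperf socket is simplified glue added for illustration.

# Condensed replay of the queue_depth.sh steps traced above (same flags, same NQNs).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

# target side: 64 MiB malloc bdev, one subsystem, RDMA listener on 192.168.100.8:4420
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# initiator side: bdevperf in wait-for-RPC mode (-z), queue depth 1024, 4 KiB verify for 10 s
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
sleep 2   # the script polls the socket via waitforlisten instead of sleeping
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests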
00:11:58.263 [2024-04-18 13:41:00.994786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108666 ] 00:11:58.263 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.521 [2024-04-18 13:41:01.083654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.521 [2024-04-18 13:41:01.206741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.778 13:41:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.778 13:41:01 -- common/autotest_common.sh@850 -- # return 0 00:11:58.778 13:41:01 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:58.778 13:41:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.778 13:41:01 -- common/autotest_common.sh@10 -- # set +x 00:11:58.778 NVMe0n1 00:11:58.778 13:41:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.778 13:41:01 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:59.036 Running I/O for 10 seconds... 00:12:08.999 00:12:08.999 Latency(us) 00:12:08.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.999 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:08.999 Verification LBA range: start 0x0 length 0x4000 00:12:08.999 NVMe0n1 : 10.07 12605.26 49.24 0.00 0.00 80961.38 33204.91 50486.99 00:12:08.999 =================================================================================================================== 00:12:08.999 Total : 12605.26 49.24 0.00 0.00 80961.38 33204.91 50486.99 00:12:08.999 0 00:12:08.999 13:41:11 -- target/queue_depth.sh@39 -- # killprocess 1108666 00:12:08.999 13:41:11 -- common/autotest_common.sh@936 -- # '[' -z 1108666 ']' 00:12:08.999 13:41:11 -- common/autotest_common.sh@940 -- # kill -0 1108666 00:12:08.999 13:41:11 -- common/autotest_common.sh@941 -- # uname 00:12:08.999 13:41:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:08.999 13:41:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1108666 00:12:09.257 13:41:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:09.257 13:41:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:09.257 13:41:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1108666' 00:12:09.257 killing process with pid 1108666 00:12:09.257 13:41:11 -- common/autotest_common.sh@955 -- # kill 1108666 00:12:09.257 Received shutdown signal, test time was about 10.000000 seconds 00:12:09.257 00:12:09.257 Latency(us) 00:12:09.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.257 =================================================================================================================== 00:12:09.257 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:09.257 13:41:11 -- common/autotest_common.sh@960 -- # wait 1108666 00:12:09.514 13:41:12 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:09.514 13:41:12 -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:09.515 13:41:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:09.515 13:41:12 -- nvmf/common.sh@117 -- # sync 00:12:09.515 13:41:12 -- nvmf/common.sh@119 -- # '[' rdma == tcp 
']' 00:12:09.515 13:41:12 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:09.515 13:41:12 -- nvmf/common.sh@120 -- # set +e 00:12:09.515 13:41:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.515 13:41:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:09.515 rmmod nvme_rdma 00:12:09.515 rmmod nvme_fabrics 00:12:09.515 13:41:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.515 13:41:12 -- nvmf/common.sh@124 -- # set -e 00:12:09.515 13:41:12 -- nvmf/common.sh@125 -- # return 0 00:12:09.515 13:41:12 -- nvmf/common.sh@478 -- # '[' -n 1108523 ']' 00:12:09.515 13:41:12 -- nvmf/common.sh@479 -- # killprocess 1108523 00:12:09.515 13:41:12 -- common/autotest_common.sh@936 -- # '[' -z 1108523 ']' 00:12:09.515 13:41:12 -- common/autotest_common.sh@940 -- # kill -0 1108523 00:12:09.515 13:41:12 -- common/autotest_common.sh@941 -- # uname 00:12:09.515 13:41:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:09.515 13:41:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1108523 00:12:09.515 13:41:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:09.515 13:41:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:09.515 13:41:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1108523' 00:12:09.515 killing process with pid 1108523 00:12:09.515 13:41:12 -- common/autotest_common.sh@955 -- # kill 1108523 00:12:09.515 13:41:12 -- common/autotest_common.sh@960 -- # wait 1108523 00:12:09.773 13:41:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:09.773 13:41:12 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:09.773 00:12:09.773 real 0m14.846s 00:12:09.773 user 0m24.453s 00:12:09.773 sys 0m2.547s 00:12:09.773 13:41:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:09.773 13:41:12 -- common/autotest_common.sh@10 -- # set +x 00:12:09.773 ************************************ 00:12:09.773 END TEST nvmf_queue_depth 00:12:09.773 ************************************ 00:12:09.773 13:41:12 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:12:09.773 13:41:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:09.773 13:41:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.773 13:41:12 -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 ************************************ 00:12:10.037 START TEST nvmf_multipath 00:12:10.037 ************************************ 00:12:10.037 13:41:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:12:10.037 * Looking for test storage... 
00:12:10.037 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:10.037 13:41:12 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.037 13:41:12 -- nvmf/common.sh@7 -- # uname -s 00:12:10.037 13:41:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.037 13:41:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.037 13:41:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.037 13:41:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.037 13:41:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.037 13:41:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.037 13:41:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.037 13:41:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.037 13:41:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.037 13:41:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.037 13:41:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:12:10.037 13:41:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:12:10.037 13:41:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.037 13:41:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.037 13:41:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.037 13:41:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.037 13:41:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:10.037 13:41:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.037 13:41:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.037 13:41:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.037 13:41:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.037 13:41:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.038 13:41:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.038 13:41:12 -- paths/export.sh@5 -- # export PATH 00:12:10.038 13:41:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.038 13:41:12 -- nvmf/common.sh@47 -- # : 0 00:12:10.038 13:41:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.038 13:41:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.038 13:41:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.038 13:41:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.038 13:41:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.038 13:41:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.038 13:41:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.038 13:41:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.038 13:41:12 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.038 13:41:12 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.038 13:41:12 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:10.038 13:41:12 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:10.038 13:41:12 -- target/multipath.sh@43 -- # nvmftestinit 00:12:10.038 13:41:12 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:10.038 13:41:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.038 13:41:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:10.038 13:41:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:10.038 13:41:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:10.038 13:41:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.038 13:41:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.038 13:41:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.038 13:41:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:10.038 13:41:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:10.038 13:41:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.038 13:41:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.368 13:41:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:13.368 13:41:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.368 13:41:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.368 13:41:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.368 13:41:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.368 13:41:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.368 13:41:15 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.368 13:41:15 -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.368 13:41:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.368 13:41:15 -- nvmf/common.sh@296 -- # e810=() 00:12:13.368 13:41:15 -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.368 13:41:15 -- nvmf/common.sh@297 -- # x722=() 00:12:13.368 13:41:15 -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.368 13:41:15 -- nvmf/common.sh@298 -- # mlx=() 00:12:13.368 13:41:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.368 13:41:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.368 13:41:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.368 13:41:15 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:13.368 13:41:15 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:13.368 13:41:15 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:13.368 13:41:15 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:13.368 13:41:15 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:13.368 13:41:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.368 13:41:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.368 13:41:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:12:13.368 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:12:13.368 13:41:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:13.368 13:41:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:13.368 13:41:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.368 13:41:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.369 13:41:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:12:13.369 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:12:13.369 13:41:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.369 13:41:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.369 13:41:15 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:13.369 13:41:15 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.369 13:41:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:13.369 13:41:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.369 13:41:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:12:13.369 Found net devices under 0000:81:00.0: mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.369 13:41:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.369 13:41:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:13.369 13:41:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.369 13:41:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:12:13.369 Found net devices under 0000:81:00.1: mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.369 13:41:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:13.369 13:41:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:13.369 13:41:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:13.369 13:41:15 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:13.369 13:41:15 -- nvmf/common.sh@58 -- # uname 00:12:13.369 13:41:15 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:13.369 13:41:15 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:13.369 13:41:15 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:13.369 13:41:15 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:13.369 13:41:15 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:13.369 13:41:15 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:13.369 13:41:15 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:13.369 13:41:15 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:13.369 13:41:15 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:13.369 13:41:15 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:13.369 13:41:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.369 13:41:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:13.369 13:41:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:13.369 13:41:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.369 13:41:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:13.369 13:41:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@105 -- # continue 2 00:12:13.369 13:41:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@105 -- # continue 2 00:12:13.369 13:41:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:13.369 13:41:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.369 13:41:15 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:13.369 13:41:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:13.369 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.369 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:12:13.369 altname enp129s0f0np0 00:12:13.369 inet 192.168.100.8/24 scope global mlx_0_0 00:12:13.369 valid_lft forever preferred_lft forever 00:12:13.369 13:41:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:13.369 13:41:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.369 13:41:15 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:13.369 13:41:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:13.369 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.369 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:12:13.369 altname enp129s0f1np1 00:12:13.369 inet 192.168.100.9/24 scope global mlx_0_1 00:12:13.369 valid_lft forever preferred_lft forever 00:12:13.369 13:41:15 -- nvmf/common.sh@411 -- # return 0 00:12:13.369 13:41:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:13.369 13:41:15 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:13.369 13:41:15 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:13.369 13:41:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.369 13:41:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:13.369 13:41:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:13.369 13:41:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.369 13:41:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:13.369 13:41:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@105 -- # continue 2 00:12:13.369 13:41:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.369 13:41:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.369 13:41:15 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:13.369 13:41:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@105 -- # continue 2 00:12:13.369 13:41:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:13.369 13:41:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.369 13:41:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:13.369 13:41:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.369 13:41:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.369 13:41:15 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:13.369 192.168.100.9' 00:12:13.369 13:41:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:13.369 192.168.100.9' 00:12:13.369 13:41:15 -- nvmf/common.sh@446 -- # head -n 1 00:12:13.369 13:41:15 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:13.369 13:41:15 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:13.369 192.168.100.9' 00:12:13.369 13:41:15 -- nvmf/common.sh@447 -- # tail -n +2 00:12:13.369 13:41:15 -- nvmf/common.sh@447 -- # head -n 1 00:12:13.369 13:41:15 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:13.369 13:41:15 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:13.369 13:41:15 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:13.369 13:41:15 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:12:13.369 13:41:15 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:12:13.369 13:41:15 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:12:13.369 run this test only with TCP transport for now 00:12:13.369 13:41:15 -- target/multipath.sh@53 -- # nvmftestfini 00:12:13.369 13:41:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:13.369 13:41:15 -- nvmf/common.sh@117 -- # sync 00:12:13.369 13:41:15 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@120 -- # set +e 00:12:13.369 13:41:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.369 13:41:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:13.369 rmmod nvme_rdma 00:12:13.369 rmmod nvme_fabrics 00:12:13.369 13:41:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.369 13:41:15 -- nvmf/common.sh@124 -- # set -e 00:12:13.369 13:41:15 -- nvmf/common.sh@125 -- # return 0 00:12:13.369 13:41:15 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:13.369 13:41:15 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:13.369 13:41:15 -- target/multipath.sh@54 -- # exit 0 00:12:13.370 13:41:15 -- target/multipath.sh@1 -- # nvmftestfini 00:12:13.370 13:41:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:13.370 13:41:15 
-- nvmf/common.sh@117 -- # sync 00:12:13.370 13:41:15 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@120 -- # set +e 00:12:13.370 13:41:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.370 13:41:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:13.370 13:41:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.370 13:41:15 -- nvmf/common.sh@124 -- # set -e 00:12:13.370 13:41:15 -- nvmf/common.sh@125 -- # return 0 00:12:13.370 13:41:15 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:13.370 00:12:13.370 real 0m2.982s 00:12:13.370 user 0m0.998s 00:12:13.370 sys 0m2.079s 00:12:13.370 13:41:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:13.370 13:41:15 -- common/autotest_common.sh@10 -- # set +x 00:12:13.370 ************************************ 00:12:13.370 END TEST nvmf_multipath 00:12:13.370 ************************************ 00:12:13.370 13:41:15 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:12:13.370 13:41:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:13.370 13:41:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.370 13:41:15 -- common/autotest_common.sh@10 -- # set +x 00:12:13.370 ************************************ 00:12:13.370 START TEST nvmf_zcopy 00:12:13.370 ************************************ 00:12:13.370 13:41:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:12:13.370 * Looking for test storage... 
00:12:13.370 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:13.370 13:41:15 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.370 13:41:15 -- nvmf/common.sh@7 -- # uname -s 00:12:13.370 13:41:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.370 13:41:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.370 13:41:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.370 13:41:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.370 13:41:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.370 13:41:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.370 13:41:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.370 13:41:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.370 13:41:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.370 13:41:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.370 13:41:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:12:13.370 13:41:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:12:13.370 13:41:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.370 13:41:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.370 13:41:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.370 13:41:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.370 13:41:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:13.370 13:41:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.370 13:41:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.370 13:41:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.370 13:41:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.370 13:41:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.370 13:41:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.370 13:41:15 -- paths/export.sh@5 -- # export PATH 00:12:13.370 13:41:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.370 13:41:15 -- nvmf/common.sh@47 -- # : 0 00:12:13.370 13:41:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.370 13:41:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.370 13:41:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.370 13:41:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.370 13:41:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.370 13:41:15 -- target/zcopy.sh@12 -- # nvmftestinit 00:12:13.370 13:41:15 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:13.370 13:41:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.370 13:41:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:13.370 13:41:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:13.370 13:41:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:13.370 13:41:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.370 13:41:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.370 13:41:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.370 13:41:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:13.370 13:41:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:13.370 13:41:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.370 13:41:15 -- common/autotest_common.sh@10 -- # set +x 00:12:15.900 13:41:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:15.900 13:41:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.900 13:41:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.900 13:41:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.900 13:41:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.900 13:41:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.900 13:41:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.900 13:41:18 -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.900 13:41:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.900 13:41:18 -- nvmf/common.sh@296 -- # e810=() 00:12:15.900 13:41:18 -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.900 13:41:18 -- nvmf/common.sh@297 -- # x722=() 
00:12:15.900 13:41:18 -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.900 13:41:18 -- nvmf/common.sh@298 -- # mlx=() 00:12:15.900 13:41:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.900 13:41:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.900 13:41:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.900 13:41:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:15.900 13:41:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:15.900 13:41:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:15.900 13:41:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.900 13:41:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.900 13:41:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:12:15.900 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:12:15.900 13:41:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:15.900 13:41:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.900 13:41:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:12:15.900 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:12:15.900 13:41:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:15.900 13:41:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.900 13:41:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:15.900 13:41:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.901 13:41:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:15.901 13:41:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.901 13:41:18 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:12:15.901 Found net devices under 0000:81:00.0: mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.901 13:41:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.901 13:41:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:15.901 13:41:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.901 13:41:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:12:15.901 Found net devices under 0000:81:00.1: mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.901 13:41:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:15.901 13:41:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:15.901 13:41:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:15.901 13:41:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:15.901 13:41:18 -- nvmf/common.sh@58 -- # uname 00:12:15.901 13:41:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:15.901 13:41:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:15.901 13:41:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:15.901 13:41:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:15.901 13:41:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:15.901 13:41:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:15.901 13:41:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:15.901 13:41:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:15.901 13:41:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:15.901 13:41:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:15.901 13:41:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:15.901 13:41:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:15.901 13:41:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:15.901 13:41:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:15.901 13:41:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:15.901 13:41:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:15.901 13:41:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@105 -- # continue 2 00:12:15.901 13:41:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@105 -- # continue 2 00:12:15.901 13:41:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:15.901 13:41:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.901 13:41:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:15.901 13:41:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:15.901 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:15.901 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:12:15.901 altname enp129s0f0np0 00:12:15.901 inet 192.168.100.8/24 scope global mlx_0_0 00:12:15.901 valid_lft forever preferred_lft forever 00:12:15.901 13:41:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:15.901 13:41:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.901 13:41:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:15.901 13:41:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:15.901 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:15.901 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:12:15.901 altname enp129s0f1np1 00:12:15.901 inet 192.168.100.9/24 scope global mlx_0_1 00:12:15.901 valid_lft forever preferred_lft forever 00:12:15.901 13:41:18 -- nvmf/common.sh@411 -- # return 0 00:12:15.901 13:41:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:15.901 13:41:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:15.901 13:41:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:15.901 13:41:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:15.901 13:41:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:15.901 13:41:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:15.901 13:41:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:15.901 13:41:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:15.901 13:41:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:15.901 13:41:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@105 -- # continue 2 00:12:15.901 13:41:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.901 13:41:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:15.901 13:41:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@105 -- # continue 2 00:12:15.901 13:41:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:15.901 13:41:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:15.901 13:41:18 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.901 13:41:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:15.901 13:41:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.901 13:41:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.901 13:41:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:15.901 192.168.100.9' 00:12:15.901 13:41:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:15.901 192.168.100.9' 00:12:15.901 13:41:18 -- nvmf/common.sh@446 -- # head -n 1 00:12:15.901 13:41:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:15.901 13:41:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:15.901 192.168.100.9' 00:12:15.901 13:41:18 -- nvmf/common.sh@447 -- # tail -n +2 00:12:15.901 13:41:18 -- nvmf/common.sh@447 -- # head -n 1 00:12:15.901 13:41:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:15.901 13:41:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:15.901 13:41:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:15.901 13:41:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:15.901 13:41:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:15.901 13:41:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:15.901 13:41:18 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:15.901 13:41:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:15.901 13:41:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:15.901 13:41:18 -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 13:41:18 -- nvmf/common.sh@470 -- # nvmfpid=1114023 00:12:15.901 13:41:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:15.901 13:41:18 -- nvmf/common.sh@471 -- # waitforlisten 1114023 00:12:15.901 13:41:18 -- common/autotest_common.sh@817 -- # '[' -z 1114023 ']' 00:12:15.901 13:41:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.901 13:41:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:15.901 13:41:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.901 13:41:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:15.901 13:41:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.160 [2024-04-18 13:41:18.724545] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:12:16.160 [2024-04-18 13:41:18.724636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.160 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.160 [2024-04-18 13:41:18.806346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.160 [2024-04-18 13:41:18.925784] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.160 [2024-04-18 13:41:18.925855] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.160 [2024-04-18 13:41:18.925872] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.160 [2024-04-18 13:41:18.925885] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.160 [2024-04-18 13:41:18.925898] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.160 [2024-04-18 13:41:18.925949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.417 13:41:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:16.418 13:41:19 -- common/autotest_common.sh@850 -- # return 0 00:12:16.418 13:41:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:16.418 13:41:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:16.418 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:12:16.418 13:41:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.418 13:41:19 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:12:16.418 13:41:19 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:12:16.418 Unsupported transport: rdma 00:12:16.418 13:41:19 -- target/zcopy.sh@17 -- # exit 0 00:12:16.418 13:41:19 -- target/zcopy.sh@1 -- # process_shm --id 0 00:12:16.418 13:41:19 -- common/autotest_common.sh@794 -- # type=--id 00:12:16.418 13:41:19 -- common/autotest_common.sh@795 -- # id=0 00:12:16.418 13:41:19 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:16.418 13:41:19 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:16.418 13:41:19 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:16.418 13:41:19 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:16.418 13:41:19 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:16.418 13:41:19 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:16.418 nvmf_trace.0 00:12:16.418 13:41:19 -- common/autotest_common.sh@809 -- # return 0 00:12:16.418 13:41:19 -- target/zcopy.sh@1 -- # nvmftestfini 00:12:16.418 13:41:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:16.418 13:41:19 -- nvmf/common.sh@117 -- # sync 00:12:16.418 13:41:19 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:16.418 13:41:19 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:16.418 13:41:19 -- nvmf/common.sh@120 -- # set +e 00:12:16.418 13:41:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.418 13:41:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:16.418 rmmod nvme_rdma 00:12:16.418 rmmod nvme_fabrics 00:12:16.418 13:41:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.418 13:41:19 -- nvmf/common.sh@124 -- # set -e 
00:12:16.418 13:41:19 -- nvmf/common.sh@125 -- # return 0 00:12:16.418 13:41:19 -- nvmf/common.sh@478 -- # '[' -n 1114023 ']' 00:12:16.418 13:41:19 -- nvmf/common.sh@479 -- # killprocess 1114023 00:12:16.418 13:41:19 -- common/autotest_common.sh@936 -- # '[' -z 1114023 ']' 00:12:16.418 13:41:19 -- common/autotest_common.sh@940 -- # kill -0 1114023 00:12:16.418 13:41:19 -- common/autotest_common.sh@941 -- # uname 00:12:16.418 13:41:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.418 13:41:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1114023 00:12:16.418 13:41:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:16.418 13:41:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:16.418 13:41:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1114023' 00:12:16.418 killing process with pid 1114023 00:12:16.418 13:41:19 -- common/autotest_common.sh@955 -- # kill 1114023 00:12:16.418 13:41:19 -- common/autotest_common.sh@960 -- # wait 1114023 00:12:16.676 13:41:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:16.676 13:41:19 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:16.676 00:12:16.676 real 0m3.671s 00:12:16.676 user 0m1.879s 00:12:16.676 sys 0m2.331s 00:12:16.676 13:41:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:16.676 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:12:16.676 ************************************ 00:12:16.676 END TEST nvmf_zcopy 00:12:16.676 ************************************ 00:12:16.676 13:41:19 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:12:16.676 13:41:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.676 13:41:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.676 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:12:16.934 ************************************ 00:12:16.934 START TEST nvmf_nmic 00:12:16.934 ************************************ 00:12:16.934 13:41:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:12:16.934 * Looking for test storage... 
00:12:16.934 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:16.934 13:41:19 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.934 13:41:19 -- nvmf/common.sh@7 -- # uname -s 00:12:16.934 13:41:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.934 13:41:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.934 13:41:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.934 13:41:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.934 13:41:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.934 13:41:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.934 13:41:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.934 13:41:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.934 13:41:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.934 13:41:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.934 13:41:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:12:16.934 13:41:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:12:16.934 13:41:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.934 13:41:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.934 13:41:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.934 13:41:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.934 13:41:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:16.934 13:41:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.934 13:41:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.934 13:41:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.934 13:41:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.934 13:41:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.934 13:41:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.934 13:41:19 -- paths/export.sh@5 -- # export PATH 00:12:16.934 13:41:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.934 13:41:19 -- nvmf/common.sh@47 -- # : 0 00:12:16.934 13:41:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.934 13:41:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.934 13:41:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.934 13:41:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.934 13:41:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.934 13:41:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.934 13:41:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.934 13:41:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.934 13:41:19 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.934 13:41:19 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.934 13:41:19 -- target/nmic.sh@14 -- # nvmftestinit 00:12:16.934 13:41:19 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:16.934 13:41:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.934 13:41:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:16.934 13:41:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:16.934 13:41:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:16.934 13:41:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.934 13:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.934 13:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.934 13:41:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:16.934 13:41:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:16.934 13:41:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.934 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:12:20.216 13:41:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:20.216 13:41:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.216 13:41:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.216 13:41:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.216 13:41:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.216 13:41:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.217 13:41:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.217 13:41:22 -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.217 13:41:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.217 13:41:22 -- nvmf/common.sh@296 -- # 
e810=() 00:12:20.217 13:41:22 -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.217 13:41:22 -- nvmf/common.sh@297 -- # x722=() 00:12:20.217 13:41:22 -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.217 13:41:22 -- nvmf/common.sh@298 -- # mlx=() 00:12:20.217 13:41:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.217 13:41:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.217 13:41:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.217 13:41:22 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:20.217 13:41:22 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:20.217 13:41:22 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:20.217 13:41:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.217 13:41:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:12:20.217 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:12:20.217 13:41:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:20.217 13:41:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:12:20.217 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:12:20.217 13:41:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:20.217 13:41:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.217 13:41:22 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.217 13:41:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:12:20.217 13:41:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.217 13:41:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:12:20.217 Found net devices under 0000:81:00.0: mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.217 13:41:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.217 13:41:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:20.217 13:41:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.217 13:41:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:12:20.217 Found net devices under 0000:81:00.1: mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.217 13:41:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:20.217 13:41:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:20.217 13:41:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:20.217 13:41:22 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:20.217 13:41:22 -- nvmf/common.sh@58 -- # uname 00:12:20.217 13:41:22 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:20.217 13:41:22 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:20.217 13:41:22 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:20.217 13:41:22 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:20.217 13:41:22 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:20.217 13:41:22 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:20.217 13:41:22 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:20.217 13:41:22 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:20.217 13:41:22 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:20.217 13:41:22 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:20.217 13:41:22 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:20.217 13:41:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:20.217 13:41:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:20.217 13:41:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:20.217 13:41:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:20.217 13:41:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:20.217 13:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@105 -- # continue 2 00:12:20.217 13:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@105 -- # continue 2 00:12:20.217 13:41:22 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:12:20.217 13:41:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:20.217 13:41:22 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:20.217 13:41:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:20.217 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:20.217 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:12:20.217 altname enp129s0f0np0 00:12:20.217 inet 192.168.100.8/24 scope global mlx_0_0 00:12:20.217 valid_lft forever preferred_lft forever 00:12:20.217 13:41:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:20.217 13:41:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:20.217 13:41:22 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:20.217 13:41:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:20.217 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:20.217 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:12:20.217 altname enp129s0f1np1 00:12:20.217 inet 192.168.100.9/24 scope global mlx_0_1 00:12:20.217 valid_lft forever preferred_lft forever 00:12:20.217 13:41:22 -- nvmf/common.sh@411 -- # return 0 00:12:20.217 13:41:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:20.217 13:41:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:20.217 13:41:22 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:20.217 13:41:22 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:20.217 13:41:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:20.217 13:41:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:20.217 13:41:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:20.217 13:41:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:20.217 13:41:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:20.217 13:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@105 -- # continue 2 00:12:20.217 13:41:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.217 13:41:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:20.217 13:41:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@105 -- # continue 2 00:12:20.217 13:41:22 -- nvmf/common.sh@86 
-- # for nic_name in $(get_rdma_if_list) 00:12:20.217 13:41:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:20.217 13:41:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:20.217 13:41:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:20.217 13:41:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:20.217 13:41:22 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:20.217 192.168.100.9' 00:12:20.217 13:41:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:20.217 192.168.100.9' 00:12:20.217 13:41:22 -- nvmf/common.sh@446 -- # head -n 1 00:12:20.217 13:41:22 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:20.217 13:41:22 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:20.217 192.168.100.9' 00:12:20.217 13:41:22 -- nvmf/common.sh@447 -- # tail -n +2 00:12:20.217 13:41:22 -- nvmf/common.sh@447 -- # head -n 1 00:12:20.217 13:41:22 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:20.217 13:41:22 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:20.217 13:41:22 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:20.217 13:41:22 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:20.218 13:41:22 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:20.218 13:41:22 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:20.218 13:41:22 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:20.218 13:41:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:20.218 13:41:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:20.218 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:12:20.218 13:41:22 -- nvmf/common.sh@470 -- # nvmfpid=1116120 00:12:20.218 13:41:22 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.218 13:41:22 -- nvmf/common.sh@471 -- # waitforlisten 1116120 00:12:20.218 13:41:22 -- common/autotest_common.sh@817 -- # '[' -z 1116120 ']' 00:12:20.218 13:41:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.218 13:41:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:20.218 13:41:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.218 13:41:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:20.218 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:12:20.218 [2024-04-18 13:41:22.529685] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:12:20.218 [2024-04-18 13:41:22.529777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.218 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.218 [2024-04-18 13:41:22.612921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.218 [2024-04-18 13:41:22.736863] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.218 [2024-04-18 13:41:22.736929] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.218 [2024-04-18 13:41:22.736955] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.218 [2024-04-18 13:41:22.736969] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.218 [2024-04-18 13:41:22.736981] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.218 [2024-04-18 13:41:22.737051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.218 [2024-04-18 13:41:22.737107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.218 [2024-04-18 13:41:22.737165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.218 [2024-04-18 13:41:22.737161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.815 13:41:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:20.815 13:41:23 -- common/autotest_common.sh@850 -- # return 0 00:12:20.815 13:41:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:20.815 13:41:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:20.815 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:20.815 13:41:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.815 13:41:23 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:20.815 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.815 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:20.815 [2024-04-18 13:41:23.572949] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcb2090/0xcb6580) succeed. 00:12:20.815 [2024-04-18 13:41:23.585312] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcb3680/0xcf7c10) succeed. 
00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 Malloc0 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 [2024-04-18 13:41:23.799782] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:21.087 test case1: single bdev can't be used in multiple subsystems 00:12:21.087 13:41:23 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@28 -- # nmic_status=0 00:12:21.087 13:41:23 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 [2024-04-18 13:41:23.823565] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:21.087 [2024-04-18 13:41:23.823599] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:21.087 [2024-04-18 13:41:23.823617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.087 request: 00:12:21.087 { 00:12:21.087 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:21.087 "namespace": { 00:12:21.087 "bdev_name": "Malloc0", 00:12:21.087 "no_auto_visible": false 00:12:21.087 }, 00:12:21.087 "method": "nvmf_subsystem_add_ns", 00:12:21.087 "req_id": 1 00:12:21.087 } 00:12:21.087 Got JSON-RPC error response 
00:12:21.087 response: 00:12:21.087 { 00:12:21.087 "code": -32602, 00:12:21.087 "message": "Invalid parameters" 00:12:21.087 } 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@29 -- # nmic_status=1 00:12:21.087 13:41:23 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:21.087 13:41:23 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:21.087 Adding namespace failed - expected result. 00:12:21.087 13:41:23 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:21.087 test case2: host connect to nvmf target in multiple paths 00:12:21.087 13:41:23 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:12:21.087 13:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.087 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.087 [2024-04-18 13:41:23.831626] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:12:21.087 13:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.087 13:41:23 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:22.465 13:41:24 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:12:23.396 13:41:26 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.396 13:41:26 -- common/autotest_common.sh@1184 -- # local i=0 00:12:23.396 13:41:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.396 13:41:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:23.396 13:41:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:25.290 13:41:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:25.290 13:41:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:25.290 13:41:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.290 13:41:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:25.290 13:41:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.290 13:41:28 -- common/autotest_common.sh@1194 -- # return 0 00:12:25.290 13:41:28 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:25.290 [global] 00:12:25.290 thread=1 00:12:25.290 invalidate=1 00:12:25.290 rw=write 00:12:25.290 time_based=1 00:12:25.290 runtime=1 00:12:25.290 ioengine=libaio 00:12:25.290 direct=1 00:12:25.290 bs=4096 00:12:25.290 iodepth=1 00:12:25.290 norandommap=0 00:12:25.290 numjobs=1 00:12:25.290 00:12:25.290 verify_dump=1 00:12:25.290 verify_backlog=512 00:12:25.290 verify_state_save=0 00:12:25.290 do_verify=1 00:12:25.290 verify=crc32c-intel 00:12:25.290 [job0] 00:12:25.290 filename=/dev/nvme0n1 00:12:25.290 Could not set queue depth (nvme0n1) 00:12:25.547 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.547 fio-3.35 00:12:25.547 Starting 1 thread 00:12:26.919 00:12:26.919 job0: (groupid=0, jobs=1): err= 0: pid=1116894: Thu Apr 18 13:41:29 2024 
00:12:26.919 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:12:26.919 slat (nsec): min=5005, max=30304, avg=9330.99, stdev=3420.73 00:12:26.919 clat (usec): min=53, max=124, avg=70.52, stdev= 9.87 00:12:26.919 lat (usec): min=58, max=133, avg=79.85, stdev=11.59 00:12:26.919 clat percentiles (usec): 00:12:26.919 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 63], 00:12:26.919 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:12:26.919 | 70.00th=[ 73], 80.00th=[ 76], 90.00th=[ 86], 95.00th=[ 92], 00:12:26.919 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 116], 00:12:26.919 | 99.99th=[ 125] 00:12:26.919 write: IOPS=6234, BW=24.4MiB/s (25.5MB/s)(24.4MiB/1001msec); 0 zone resets 00:12:26.919 slat (nsec): min=5616, max=33415, avg=10449.16, stdev=3924.13 00:12:26.919 clat (usec): min=48, max=117, avg=65.67, stdev= 9.89 00:12:26.919 lat (usec): min=55, max=137, avg=76.11, stdev=12.07 00:12:26.919 clat percentiles (usec): 00:12:26.919 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 58], 00:12:26.919 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 67], 00:12:26.919 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 87], 00:12:26.919 | 99.00th=[ 98], 99.50th=[ 101], 99.90th=[ 109], 99.95th=[ 110], 00:12:26.919 | 99.99th=[ 119] 00:12:26.919 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:12:26.919 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:12:26.919 lat (usec) : 50=0.02%, 100=98.90%, 250=1.07% 00:12:26.919 cpu : usr=9.00%, sys=15.50%, ctx=12386, majf=0, minf=2 00:12:26.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:26.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.919 issued rwts: total=6144,6241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:26.919 00:12:26.919 Run status group 0 (all jobs): 00:12:26.919 READ: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:12:26.919 WRITE: bw=24.4MiB/s (25.5MB/s), 24.4MiB/s-24.4MiB/s (25.5MB/s-25.5MB/s), io=24.4MiB (25.6MB), run=1001-1001msec 00:12:26.919 00:12:26.919 Disk stats (read/write): 00:12:26.919 nvme0n1: ios=5632/5632, merge=0/0, ticks=419/377, in_queue=796, util=90.78% 00:12:26.919 13:41:29 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:28.853 13:41:31 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.853 13:41:31 -- common/autotest_common.sh@1205 -- # local i=0 00:12:28.853 13:41:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:28.853 13:41:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.853 13:41:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:28.853 13:41:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.853 13:41:31 -- common/autotest_common.sh@1217 -- # return 0 00:12:28.853 13:41:31 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:28.853 13:41:31 -- target/nmic.sh@53 -- # nvmftestfini 00:12:28.853 13:41:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:28.853 13:41:31 -- nvmf/common.sh@117 -- # sync 00:12:28.853 13:41:31 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:28.853 
13:41:31 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:28.853 13:41:31 -- nvmf/common.sh@120 -- # set +e 00:12:28.853 13:41:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.853 13:41:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:28.853 rmmod nvme_rdma 00:12:29.109 rmmod nvme_fabrics 00:12:29.109 13:41:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.109 13:41:31 -- nvmf/common.sh@124 -- # set -e 00:12:29.109 13:41:31 -- nvmf/common.sh@125 -- # return 0 00:12:29.109 13:41:31 -- nvmf/common.sh@478 -- # '[' -n 1116120 ']' 00:12:29.109 13:41:31 -- nvmf/common.sh@479 -- # killprocess 1116120 00:12:29.109 13:41:31 -- common/autotest_common.sh@936 -- # '[' -z 1116120 ']' 00:12:29.109 13:41:31 -- common/autotest_common.sh@940 -- # kill -0 1116120 00:12:29.109 13:41:31 -- common/autotest_common.sh@941 -- # uname 00:12:29.109 13:41:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.109 13:41:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1116120 00:12:29.109 13:41:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:29.109 13:41:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:29.109 13:41:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1116120' 00:12:29.109 killing process with pid 1116120 00:12:29.109 13:41:31 -- common/autotest_common.sh@955 -- # kill 1116120 00:12:29.109 13:41:31 -- common/autotest_common.sh@960 -- # wait 1116120 00:12:29.367 13:41:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:29.367 13:41:32 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:29.367 00:12:29.367 real 0m12.530s 00:12:29.367 user 0m38.803s 00:12:29.367 sys 0m2.744s 00:12:29.367 13:41:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:29.367 13:41:32 -- common/autotest_common.sh@10 -- # set +x 00:12:29.367 ************************************ 00:12:29.367 END TEST nvmf_nmic 00:12:29.367 ************************************ 00:12:29.367 13:41:32 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:29.367 13:41:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:29.367 13:41:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.367 13:41:32 -- common/autotest_common.sh@10 -- # set +x 00:12:29.625 ************************************ 00:12:29.625 START TEST nvmf_fio_target 00:12:29.626 ************************************ 00:12:29.626 13:41:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:29.626 * Looking for test storage... 
00:12:29.626 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:29.626 13:41:32 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.626 13:41:32 -- nvmf/common.sh@7 -- # uname -s 00:12:29.626 13:41:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.626 13:41:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.626 13:41:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.626 13:41:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.626 13:41:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.626 13:41:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.626 13:41:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.626 13:41:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.626 13:41:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.626 13:41:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.626 13:41:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:12:29.626 13:41:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:12:29.626 13:41:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.626 13:41:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.626 13:41:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.626 13:41:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.626 13:41:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:29.626 13:41:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.626 13:41:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.626 13:41:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.626 13:41:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.626 13:41:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.626 13:41:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.626 13:41:32 -- paths/export.sh@5 -- # export PATH 00:12:29.626 13:41:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.626 13:41:32 -- nvmf/common.sh@47 -- # : 0 00:12:29.626 13:41:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.626 13:41:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.626 13:41:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.626 13:41:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.626 13:41:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.626 13:41:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.626 13:41:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.626 13:41:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.626 13:41:32 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.626 13:41:32 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.626 13:41:32 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:29.626 13:41:32 -- target/fio.sh@16 -- # nvmftestinit 00:12:29.626 13:41:32 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:29.626 13:41:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.626 13:41:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:29.626 13:41:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:29.626 13:41:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:29.626 13:41:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.626 13:41:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.626 13:41:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.626 13:41:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:29.626 13:41:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:29.626 13:41:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.626 13:41:32 -- common/autotest_common.sh@10 -- # set +x 00:12:32.908 13:41:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:32.908 13:41:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.908 13:41:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.908 13:41:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.908 13:41:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.908 13:41:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.908 13:41:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.908 13:41:34 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:32.908 13:41:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.908 13:41:34 -- nvmf/common.sh@296 -- # e810=() 00:12:32.908 13:41:34 -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.908 13:41:34 -- nvmf/common.sh@297 -- # x722=() 00:12:32.908 13:41:34 -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.908 13:41:34 -- nvmf/common.sh@298 -- # mlx=() 00:12:32.908 13:41:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.908 13:41:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.908 13:41:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.908 13:41:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:32.908 13:41:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:32.908 13:41:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:32.908 13:41:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.908 13:41:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.908 13:41:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:12:32.908 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:12:32.908 13:41:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.908 13:41:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.908 13:41:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:12:32.908 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:12:32.908 13:41:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.908 13:41:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.908 13:41:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.908 13:41:34 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.908 13:41:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:32.908 13:41:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.908 13:41:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:12:32.908 Found net devices under 0000:81:00.0: mlx_0_0 00:12:32.908 13:41:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.908 13:41:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.908 13:41:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.908 13:41:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:32.908 13:41:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.908 13:41:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:12:32.908 Found net devices under 0000:81:00.1: mlx_0_1 00:12:32.908 13:41:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.908 13:41:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:32.908 13:41:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:32.908 13:41:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:32.908 13:41:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:32.908 13:41:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:32.908 13:41:34 -- nvmf/common.sh@58 -- # uname 00:12:32.908 13:41:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:32.908 13:41:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:32.908 13:41:35 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:32.908 13:41:35 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:32.908 13:41:35 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:32.908 13:41:35 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:32.908 13:41:35 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:32.908 13:41:35 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:32.908 13:41:35 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:32.908 13:41:35 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:32.908 13:41:35 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:32.908 13:41:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.908 13:41:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:32.908 13:41:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:32.908 13:41:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.908 13:41:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:32.908 13:41:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.908 13:41:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.908 13:41:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@105 -- # continue 2 00:12:32.909 13:41:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:32.909 13:41:35 -- 
nvmf/common.sh@105 -- # continue 2 00:12:32.909 13:41:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:32.909 13:41:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.909 13:41:35 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:32.909 13:41:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:32.909 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.909 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:12:32.909 altname enp129s0f0np0 00:12:32.909 inet 192.168.100.8/24 scope global mlx_0_0 00:12:32.909 valid_lft forever preferred_lft forever 00:12:32.909 13:41:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:32.909 13:41:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:32.909 13:41:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.909 13:41:35 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:32.909 13:41:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:32.909 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.909 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:12:32.909 altname enp129s0f1np1 00:12:32.909 inet 192.168.100.9/24 scope global mlx_0_1 00:12:32.909 valid_lft forever preferred_lft forever 00:12:32.909 13:41:35 -- nvmf/common.sh@411 -- # return 0 00:12:32.909 13:41:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:32.909 13:41:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:32.909 13:41:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:32.909 13:41:35 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:32.909 13:41:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.909 13:41:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:32.909 13:41:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:32.909 13:41:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.909 13:41:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:32.909 13:41:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@105 -- # continue 2 00:12:32.909 13:41:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.909 13:41:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.909 13:41:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:12:32.909 13:41:35 -- nvmf/common.sh@105 -- # continue 2 00:12:32.909 13:41:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:32.909 13:41:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.909 13:41:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:32.909 13:41:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:32.909 13:41:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.909 13:41:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.909 13:41:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:32.909 192.168.100.9' 00:12:32.909 13:41:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:32.909 192.168.100.9' 00:12:32.909 13:41:35 -- nvmf/common.sh@446 -- # head -n 1 00:12:32.909 13:41:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:32.909 13:41:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:32.909 192.168.100.9' 00:12:32.909 13:41:35 -- nvmf/common.sh@447 -- # tail -n +2 00:12:32.909 13:41:35 -- nvmf/common.sh@447 -- # head -n 1 00:12:32.909 13:41:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:32.909 13:41:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:32.909 13:41:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:32.909 13:41:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:32.909 13:41:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:32.909 13:41:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:32.909 13:41:35 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:32.909 13:41:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:32.909 13:41:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:32.909 13:41:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.909 13:41:35 -- nvmf/common.sh@470 -- # nvmfpid=1119388 00:12:32.909 13:41:35 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.909 13:41:35 -- nvmf/common.sh@471 -- # waitforlisten 1119388 00:12:32.909 13:41:35 -- common/autotest_common.sh@817 -- # '[' -z 1119388 ']' 00:12:32.909 13:41:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.909 13:41:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:32.909 13:41:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.909 13:41:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:32.909 13:41:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.909 [2024-04-18 13:41:35.177277] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:12:32.909 [2024-04-18 13:41:35.177365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.909 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.909 [2024-04-18 13:41:35.256785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.909 [2024-04-18 13:41:35.382527] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.909 [2024-04-18 13:41:35.382587] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.909 [2024-04-18 13:41:35.382604] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.909 [2024-04-18 13:41:35.382617] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.909 [2024-04-18 13:41:35.382629] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.909 [2024-04-18 13:41:35.382714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.909 [2024-04-18 13:41:35.382767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.909 [2024-04-18 13:41:35.382795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.909 [2024-04-18 13:41:35.382798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.909 13:41:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:32.909 13:41:35 -- common/autotest_common.sh@850 -- # return 0 00:12:32.909 13:41:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:32.909 13:41:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:32.909 13:41:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.909 13:41:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.909 13:41:35 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:33.167 [2024-04-18 13:41:35.881669] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1477090/0x147b580) succeed. 00:12:33.167 [2024-04-18 13:41:35.893828] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1478680/0x14bcc10) succeed. 
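For reference, the target setup that target/fio.sh performs over the following steps (seven malloc bdevs, a raid0 and a concat0 bdev, the cnode1 subsystem with its namespaces and RDMA listener, then the host-side connect) can be condensed into the sketch below. This is an illustrative summary assembled from the commands visible in this log, not an extra test step: rpc_py is the variable fio.sh sets to scripts/rpc.py, and the seq loop is only shorthand for the seven individual bdev_malloc_create calls shown later.

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport, as created in the step just above
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # Malloc0..Malloc6: Malloc0/1 become plain namespaces, Malloc2/3 back raid0, Malloc4/5/6 back concat0
  for i in $(seq 0 6); do $rpc_py bdev_malloc_create 64 512; done
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # subsystem cnode1 with serial SPDKISFASTANDAWESOME, four namespaces, listener on 192.168.100.8:4420
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # host side: connect over RDMA, exposing the four namespaces as nvme0n1..nvme0n4
  nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
      --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
      -a 192.168.100.8 -s 4420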
00:12:33.424 13:41:36 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:33.988 13:41:36 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:33.988 13:41:36 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.246 13:41:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:34.246 13:41:36 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.503 13:41:37 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:34.503 13:41:37 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:35.067 13:41:37 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:35.067 13:41:37 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:35.323 13:41:37 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:35.580 13:41:38 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:35.580 13:41:38 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:36.143 13:41:38 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:36.143 13:41:38 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:36.707 13:41:39 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:36.707 13:41:39 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:36.998 13:41:39 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.255 13:41:39 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:37.255 13:41:39 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.512 13:41:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:37.512 13:41:40 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.770 13:41:40 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:38.027 [2024-04-18 13:41:40.774560] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:38.027 13:41:40 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:38.591 13:41:41 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:38.848 13:41:41 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:39.780 13:41:42 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:39.780 13:41:42 -- common/autotest_common.sh@1184 -- # local 
i=0 00:12:39.780 13:41:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.780 13:41:42 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:12:39.780 13:41:42 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:12:39.780 13:41:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:41.675 13:41:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:41.675 13:41:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:41.675 13:41:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.932 13:41:44 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:12:41.932 13:41:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.932 13:41:44 -- common/autotest_common.sh@1194 -- # return 0 00:12:41.932 13:41:44 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:41.932 [global] 00:12:41.932 thread=1 00:12:41.932 invalidate=1 00:12:41.932 rw=write 00:12:41.932 time_based=1 00:12:41.932 runtime=1 00:12:41.932 ioengine=libaio 00:12:41.932 direct=1 00:12:41.932 bs=4096 00:12:41.932 iodepth=1 00:12:41.932 norandommap=0 00:12:41.932 numjobs=1 00:12:41.932 00:12:41.932 verify_dump=1 00:12:41.932 verify_backlog=512 00:12:41.932 verify_state_save=0 00:12:41.932 do_verify=1 00:12:41.932 verify=crc32c-intel 00:12:41.932 [job0] 00:12:41.932 filename=/dev/nvme0n1 00:12:41.932 [job1] 00:12:41.932 filename=/dev/nvme0n2 00:12:41.932 [job2] 00:12:41.932 filename=/dev/nvme0n3 00:12:41.932 [job3] 00:12:41.932 filename=/dev/nvme0n4 00:12:41.932 Could not set queue depth (nvme0n1) 00:12:41.932 Could not set queue depth (nvme0n2) 00:12:41.932 Could not set queue depth (nvme0n3) 00:12:41.932 Could not set queue depth (nvme0n4) 00:12:41.932 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.932 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.932 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.932 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.932 fio-3.35 00:12:41.932 Starting 4 threads 00:12:43.304 00:12:43.304 job0: (groupid=0, jobs=1): err= 0: pid=1120727: Thu Apr 18 13:41:45 2024 00:12:43.304 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:43.304 slat (nsec): min=5128, max=22447, avg=6582.82, stdev=1620.32 00:12:43.304 clat (usec): min=75, max=330, avg=111.84, stdev=45.66 00:12:43.304 lat (usec): min=81, max=342, avg=118.43, stdev=46.20 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 85], 00:12:43.304 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:12:43.304 | 70.00th=[ 96], 80.00th=[ 163], 90.00th=[ 192], 95.00th=[ 202], 00:12:43.304 | 99.00th=[ 273], 99.50th=[ 306], 99.90th=[ 326], 99.95th=[ 326], 00:12:43.304 | 99.99th=[ 330] 00:12:43.304 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(17.2MiB/1001msec); 0 zone resets 00:12:43.304 slat (nsec): min=5763, max=38995, avg=7382.78, stdev=1467.28 00:12:43.304 clat (usec): min=70, max=333, avg=105.52, stdev=46.66 00:12:43.304 lat (usec): min=76, max=343, avg=112.90, stdev=47.11 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 78], 00:12:43.304 | 30.00th=[ 
80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:12:43.304 | 70.00th=[ 89], 80.00th=[ 161], 90.00th=[ 190], 95.00th=[ 202], 00:12:43.304 | 99.00th=[ 233], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 322], 00:12:43.304 | 99.99th=[ 334] 00:12:43.304 bw ( KiB/s): min=22600, max=22600, per=34.51%, avg=22600.00, stdev= 0.00, samples=1 00:12:43.304 iops : min= 5650, max= 5650, avg=5650.00, stdev= 0.00, samples=1 00:12:43.304 lat (usec) : 100=74.72%, 250=24.29%, 500=0.99% 00:12:43.304 cpu : usr=4.90%, sys=7.10%, ctx=8510, majf=0, minf=2 00:12:43.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.304 issued rwts: total=4096,4414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.304 job1: (groupid=0, jobs=1): err= 0: pid=1120728: Thu Apr 18 13:41:45 2024 00:12:43.304 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:12:43.304 slat (nsec): min=5245, max=23182, avg=6443.09, stdev=855.61 00:12:43.304 clat (usec): min=64, max=342, avg=89.76, stdev=25.05 00:12:43.304 lat (usec): min=70, max=349, avg=96.21, stdev=25.24 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 75], 00:12:43.304 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 87], 00:12:43.304 | 70.00th=[ 89], 80.00th=[ 92], 90.00th=[ 103], 95.00th=[ 153], 00:12:43.304 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 221], 99.95th=[ 269], 00:12:43.304 | 99.99th=[ 343] 00:12:43.304 write: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1002msec); 0 zone resets 00:12:43.304 slat (nsec): min=5796, max=32726, avg=7322.57, stdev=911.82 00:12:43.304 clat (usec): min=57, max=455, avg=84.08, stdev=22.81 00:12:43.304 lat (usec): min=65, max=462, avg=91.40, stdev=23.02 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 63], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 71], 00:12:43.304 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 81], 00:12:43.304 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 147], 00:12:43.304 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 198], 99.95th=[ 210], 00:12:43.304 | 99.99th=[ 457] 00:12:43.304 bw ( KiB/s): min=20480, max=22216, per=32.59%, avg=21348.00, stdev=1227.54, samples=2 00:12:43.304 iops : min= 5120, max= 5554, avg=5337.00, stdev=306.88, samples=2 00:12:43.304 lat (usec) : 100=89.37%, 250=10.60%, 500=0.04% 00:12:43.304 cpu : usr=6.09%, sys=8.39%, ctx=10459, majf=0, minf=1 00:12:43.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.304 issued rwts: total=5120,5337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.304 job2: (groupid=0, jobs=1): err= 0: pid=1120729: Thu Apr 18 13:41:45 2024 00:12:43.304 read: IOPS=2913, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:12:43.304 slat (nsec): min=5415, max=39089, avg=10777.14, stdev=5447.05 00:12:43.304 clat (usec): min=88, max=496, avg=155.92, stdev=35.37 00:12:43.304 lat (usec): min=112, max=503, avg=166.70, stdev=32.16 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 102], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 115], 00:12:43.304 | 
30.00th=[ 128], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 165], 00:12:43.304 | 70.00th=[ 174], 80.00th=[ 186], 90.00th=[ 202], 95.00th=[ 210], 00:12:43.304 | 99.00th=[ 235], 99.50th=[ 258], 99.90th=[ 285], 99.95th=[ 285], 00:12:43.304 | 99.99th=[ 498] 00:12:43.304 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:43.304 slat (nsec): min=6499, max=44791, avg=11028.78, stdev=5053.92 00:12:43.304 clat (usec): min=86, max=288, avg=150.96, stdev=31.89 00:12:43.304 lat (usec): min=103, max=297, avg=161.99, stdev=29.21 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 116], 00:12:43.304 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:12:43.304 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 196], 95.00th=[ 204], 00:12:43.304 | 99.00th=[ 225], 99.50th=[ 243], 99.90th=[ 265], 99.95th=[ 269], 00:12:43.304 | 99.99th=[ 289] 00:12:43.304 bw ( KiB/s): min=14152, max=14152, per=21.61%, avg=14152.00, stdev= 0.00, samples=1 00:12:43.304 iops : min= 3538, max= 3538, avg=3538.00, stdev= 0.00, samples=1 00:12:43.304 lat (usec) : 100=3.29%, 250=96.21%, 500=0.50% 00:12:43.304 cpu : usr=4.10%, sys=7.50%, ctx=5988, majf=0, minf=1 00:12:43.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.304 issued rwts: total=2916,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.304 job3: (groupid=0, jobs=1): err= 0: pid=1120730: Thu Apr 18 13:41:45 2024 00:12:43.304 read: IOPS=3123, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:12:43.304 slat (nsec): min=5237, max=26507, avg=6930.38, stdev=1458.06 00:12:43.304 clat (usec): min=87, max=305, avg=144.50, stdev=42.16 00:12:43.304 lat (usec): min=94, max=313, avg=151.43, stdev=42.64 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 93], 5.00th=[ 97], 10.00th=[ 100], 20.00th=[ 104], 00:12:43.304 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 131], 60.00th=[ 165], 00:12:43.304 | 70.00th=[ 174], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 208], 00:12:43.304 | 99.00th=[ 237], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 285], 00:12:43.304 | 99.99th=[ 306] 00:12:43.304 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:43.304 slat (nsec): min=5917, max=27756, avg=7857.34, stdev=1225.29 00:12:43.304 clat (usec): min=80, max=278, avg=135.40, stdev=37.98 00:12:43.304 lat (usec): min=87, max=288, avg=143.25, stdev=38.35 00:12:43.304 clat percentiles (usec): 00:12:43.304 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 96], 00:12:43.304 | 30.00th=[ 99], 40.00th=[ 106], 50.00th=[ 149], 60.00th=[ 155], 00:12:43.304 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 198], 00:12:43.304 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 265], 99.95th=[ 277], 00:12:43.304 | 99.99th=[ 277] 00:12:43.304 bw ( KiB/s): min=16384, max=16384, per=25.01%, avg=16384.00, stdev= 0.00, samples=1 00:12:43.304 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:43.304 lat (usec) : 100=22.20%, 250=77.34%, 500=0.46% 00:12:43.304 cpu : usr=2.50%, sys=7.40%, ctx=6711, majf=0, minf=1 00:12:43.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.304 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.305 issued rwts: total=3127,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.305 00:12:43.305 Run status group 0 (all jobs): 00:12:43.305 READ: bw=59.5MiB/s (62.4MB/s), 11.4MiB/s-20.0MiB/s (11.9MB/s-20.9MB/s), io=59.6MiB (62.5MB), run=1001-1002msec 00:12:43.305 WRITE: bw=64.0MiB/s (67.1MB/s), 12.0MiB/s-20.8MiB/s (12.6MB/s-21.8MB/s), io=64.1MiB (67.2MB), run=1001-1002msec 00:12:43.305 00:12:43.305 Disk stats (read/write): 00:12:43.305 nvme0n1: ios=3634/4069, merge=0/0, ticks=354/390, in_queue=744, util=85.07% 00:12:43.305 nvme0n2: ios=4096/4434, merge=0/0, ticks=359/376, in_queue=735, util=85.82% 00:12:43.305 nvme0n3: ios=2560/2569, merge=0/0, ticks=385/365, in_queue=750, util=88.62% 00:12:43.305 nvme0n4: ios=2778/3072, merge=0/0, ticks=384/374, in_queue=758, util=89.56% 00:12:43.305 13:41:45 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:43.305 [global] 00:12:43.305 thread=1 00:12:43.305 invalidate=1 00:12:43.305 rw=randwrite 00:12:43.305 time_based=1 00:12:43.305 runtime=1 00:12:43.305 ioengine=libaio 00:12:43.305 direct=1 00:12:43.305 bs=4096 00:12:43.305 iodepth=1 00:12:43.305 norandommap=0 00:12:43.305 numjobs=1 00:12:43.305 00:12:43.305 verify_dump=1 00:12:43.305 verify_backlog=512 00:12:43.305 verify_state_save=0 00:12:43.305 do_verify=1 00:12:43.305 verify=crc32c-intel 00:12:43.305 [job0] 00:12:43.305 filename=/dev/nvme0n1 00:12:43.305 [job1] 00:12:43.305 filename=/dev/nvme0n2 00:12:43.305 [job2] 00:12:43.305 filename=/dev/nvme0n3 00:12:43.305 [job3] 00:12:43.305 filename=/dev/nvme0n4 00:12:43.305 Could not set queue depth (nvme0n1) 00:12:43.305 Could not set queue depth (nvme0n2) 00:12:43.305 Could not set queue depth (nvme0n3) 00:12:43.305 Could not set queue depth (nvme0n4) 00:12:43.622 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.622 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.622 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.622 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.622 fio-3.35 00:12:43.622 Starting 4 threads 00:12:44.992 00:12:44.992 job0: (groupid=0, jobs=1): err= 0: pid=1120952: Thu Apr 18 13:41:47 2024 00:12:44.992 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:44.992 slat (nsec): min=4935, max=28664, avg=6115.56, stdev=1741.21 00:12:44.992 clat (usec): min=67, max=231, avg=96.73, stdev=31.04 00:12:44.992 lat (usec): min=73, max=237, avg=102.85, stdev=31.33 00:12:44.992 clat percentiles (usec): 00:12:44.992 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:12:44.992 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 85], 00:12:44.992 | 70.00th=[ 95], 80.00th=[ 133], 90.00th=[ 145], 95.00th=[ 161], 00:12:44.992 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 215], 99.95th=[ 221], 00:12:44.992 | 99.99th=[ 231] 00:12:44.992 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1001msec); 0 zone resets 00:12:44.992 slat (nsec): min=5602, max=38706, avg=6796.44, stdev=1922.64 00:12:44.992 clat (usec): min=62, max=401, avg=92.94, stdev=30.97 00:12:44.992 lat (usec): min=68, max=408, avg=99.73, stdev=31.41 00:12:44.992 clat 
percentiles (usec): 00:12:44.993 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:12:44.993 | 30.00th=[ 73], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 84], 00:12:44.993 | 70.00th=[ 95], 80.00th=[ 127], 90.00th=[ 137], 95.00th=[ 155], 00:12:44.993 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 202], 99.95th=[ 210], 00:12:44.993 | 99.99th=[ 400] 00:12:44.993 bw ( KiB/s): min=16384, max=16384, per=23.13%, avg=16384.00, stdev= 0.00, samples=1 00:12:44.993 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:44.993 lat (usec) : 100=73.74%, 250=26.25%, 500=0.01% 00:12:44.993 cpu : usr=4.20%, sys=5.80%, ctx=9704, majf=0, minf=1 00:12:44.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:44.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 issued rwts: total=4608,5095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:44.993 job1: (groupid=0, jobs=1): err= 0: pid=1120954: Thu Apr 18 13:41:47 2024 00:12:44.993 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:44.993 slat (nsec): min=6610, max=26605, avg=7682.55, stdev=1303.28 00:12:44.993 clat (usec): min=68, max=217, avg=107.87, stdev=30.99 00:12:44.993 lat (usec): min=75, max=224, avg=115.55, stdev=31.28 00:12:44.993 clat percentiles (usec): 00:12:44.993 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 79], 00:12:44.993 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 99], 60.00th=[ 123], 00:12:44.993 | 70.00th=[ 128], 80.00th=[ 135], 90.00th=[ 149], 95.00th=[ 161], 00:12:44.993 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 208], 99.95th=[ 212], 00:12:44.993 | 99.99th=[ 219] 00:12:44.993 write: IOPS=4538, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1001msec); 0 zone resets 00:12:44.993 slat (nsec): min=7721, max=49987, avg=8982.16, stdev=1630.34 00:12:44.993 clat (usec): min=63, max=214, avg=102.38, stdev=32.09 00:12:44.993 lat (usec): min=71, max=223, avg=111.36, stdev=32.28 00:12:44.993 clat percentiles (usec): 00:12:44.993 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:12:44.993 | 30.00th=[ 76], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 117], 00:12:44.993 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 145], 95.00th=[ 161], 00:12:44.993 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 208], 99.95th=[ 210], 00:12:44.993 | 99.99th=[ 215] 00:12:44.993 bw ( KiB/s): min=20664, max=20664, per=29.17%, avg=20664.00, stdev= 0.00, samples=1 00:12:44.993 iops : min= 5166, max= 5166, avg=5166.00, stdev= 0.00, samples=1 00:12:44.993 lat (usec) : 100=53.21%, 250=46.79% 00:12:44.993 cpu : usr=6.10%, sys=9.40%, ctx=8639, majf=0, minf=2 00:12:44.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:44.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 issued rwts: total=4096,4543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:44.993 job2: (groupid=0, jobs=1): err= 0: pid=1120956: Thu Apr 18 13:41:47 2024 00:12:44.993 read: IOPS=3323, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec) 00:12:44.993 slat (nsec): min=5399, max=34482, avg=6986.10, stdev=1447.38 00:12:44.993 clat (usec): min=74, max=266, avg=139.13, stdev=14.81 00:12:44.993 lat (usec): min=92, max=272, avg=146.12, stdev=14.85 00:12:44.993 clat 
percentiles (usec): 00:12:44.993 | 1.00th=[ 96], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 129], 00:12:44.993 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:12:44.993 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:12:44.993 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 202], 99.95th=[ 208], 00:12:44.993 | 99.99th=[ 269] 00:12:44.993 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:44.993 slat (nsec): min=6376, max=38765, avg=7706.53, stdev=1278.25 00:12:44.993 clat (usec): min=79, max=392, avg=131.71, stdev=13.76 00:12:44.993 lat (usec): min=87, max=399, avg=139.42, stdev=13.82 00:12:44.993 clat percentiles (usec): 00:12:44.993 | 1.00th=[ 93], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 123], 00:12:44.993 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:12:44.993 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 155], 00:12:44.993 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 196], 99.95th=[ 212], 00:12:44.993 | 99.99th=[ 392] 00:12:44.993 bw ( KiB/s): min=15040, max=15040, per=21.23%, avg=15040.00, stdev= 0.00, samples=1 00:12:44.993 iops : min= 3760, max= 3760, avg=3760.00, stdev= 0.00, samples=1 00:12:44.993 lat (usec) : 100=1.49%, 250=98.48%, 500=0.03% 00:12:44.993 cpu : usr=2.60%, sys=7.80%, ctx=6911, majf=0, minf=1 00:12:44.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:44.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 issued rwts: total=3327,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:44.993 job3: (groupid=0, jobs=1): err= 0: pid=1120957: Thu Apr 18 13:41:47 2024 00:12:44.993 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec) 00:12:44.993 slat (nsec): min=6055, max=43178, avg=13172.04, stdev=6091.61 00:12:44.993 clat (usec): min=79, max=232, avg=102.47, stdev=12.33 00:12:44.993 lat (usec): min=86, max=244, avg=115.64, stdev=15.17 00:12:44.993 clat percentiles (usec): 00:12:44.993 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 93], 00:12:44.993 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 103], 00:12:44.993 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 119], 95.00th=[ 126], 00:12:44.993 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 163], 99.95th=[ 174], 00:12:44.993 | 99.99th=[ 233] 00:12:44.993 write: IOPS=4506, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1000msec); 0 zone resets 00:12:44.993 slat (nsec): min=6893, max=45458, avg=14499.16, stdev=6161.69 00:12:44.993 clat (usec): min=69, max=154, avg=95.69, stdev=11.23 00:12:44.993 lat (usec): min=81, max=193, avg=110.19, stdev=14.08 00:12:44.993 clat percentiles (usec): 00:12:44.993 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:12:44.993 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:12:44.993 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 113], 95.00th=[ 118], 00:12:44.993 | 99.00th=[ 130], 99.50th=[ 135], 99.90th=[ 151], 99.95th=[ 151], 00:12:44.993 | 99.99th=[ 155] 00:12:44.993 bw ( KiB/s): min=16384, max=16384, per=23.13%, avg=16384.00, stdev= 0.00, samples=1 00:12:44.993 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:44.993 lat (usec) : 100=62.22%, 250=37.78% 00:12:44.993 cpu : usr=7.30%, sys=14.20%, ctx=8602, majf=0, minf=1 00:12:44.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:44.993 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.993 issued rwts: total=4096,4506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:44.993 00:12:44.993 Run status group 0 (all jobs): 00:12:44.993 READ: bw=62.9MiB/s (66.0MB/s), 13.0MiB/s-18.0MiB/s (13.6MB/s-18.9MB/s), io=63.0MiB (66.1MB), run=1000-1001msec 00:12:44.993 WRITE: bw=69.2MiB/s (72.5MB/s), 14.0MiB/s-19.9MiB/s (14.7MB/s-20.8MB/s), io=69.2MiB (72.6MB), run=1000-1001msec 00:12:44.993 00:12:44.993 Disk stats (read/write): 00:12:44.993 nvme0n1: ios=3876/4096, merge=0/0, ticks=380/384, in_queue=764, util=85.57% 00:12:44.993 nvme0n2: ios=3584/3975, merge=0/0, ticks=374/393, in_queue=767, util=86.16% 00:12:44.993 nvme0n3: ios=2760/3072, merge=0/0, ticks=385/393, in_queue=778, util=88.78% 00:12:44.993 nvme0n4: ios=3514/3584, merge=0/0, ticks=365/339, in_queue=704, util=89.53% 00:12:44.993 13:41:47 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:44.993 [global] 00:12:44.993 thread=1 00:12:44.993 invalidate=1 00:12:44.993 rw=write 00:12:44.993 time_based=1 00:12:44.993 runtime=1 00:12:44.993 ioengine=libaio 00:12:44.993 direct=1 00:12:44.993 bs=4096 00:12:44.993 iodepth=128 00:12:44.993 norandommap=0 00:12:44.993 numjobs=1 00:12:44.993 00:12:44.993 verify_dump=1 00:12:44.993 verify_backlog=512 00:12:44.993 verify_state_save=0 00:12:44.993 do_verify=1 00:12:44.993 verify=crc32c-intel 00:12:44.993 [job0] 00:12:44.993 filename=/dev/nvme0n1 00:12:44.993 [job1] 00:12:44.993 filename=/dev/nvme0n2 00:12:44.993 [job2] 00:12:44.993 filename=/dev/nvme0n3 00:12:44.993 [job3] 00:12:44.993 filename=/dev/nvme0n4 00:12:44.993 Could not set queue depth (nvme0n1) 00:12:44.993 Could not set queue depth (nvme0n2) 00:12:44.993 Could not set queue depth (nvme0n3) 00:12:44.993 Could not set queue depth (nvme0n4) 00:12:44.993 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:44.993 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:44.993 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:44.993 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:44.993 fio-3.35 00:12:44.993 Starting 4 threads 00:12:46.370 00:12:46.370 job0: (groupid=0, jobs=1): err= 0: pid=1121193: Thu Apr 18 13:41:48 2024 00:12:46.370 read: IOPS=9253, BW=36.1MiB/s (37.9MB/s)(36.2MiB/1002msec) 00:12:46.370 slat (usec): min=3, max=1117, avg=50.89, stdev=177.84 00:12:46.370 clat (usec): min=724, max=7464, avg=6835.32, stdev=467.95 00:12:46.370 lat (usec): min=1751, max=7469, avg=6886.21, stdev=434.17 00:12:46.370 clat percentiles (usec): 00:12:46.370 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6652], 00:12:46.370 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7046], 00:12:46.370 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7177], 95.00th=[ 7242], 00:12:46.370 | 99.00th=[ 7308], 99.50th=[ 7308], 99.90th=[ 7439], 99.95th=[ 7439], 00:12:46.370 | 99.99th=[ 7439] 00:12:46.370 write: IOPS=9708, BW=37.9MiB/s (39.8MB/s)(38.0MiB/1002msec); 0 zone resets 00:12:46.370 slat (usec): min=4, max=1984, avg=48.28, stdev=166.70 00:12:46.370 clat (usec): min=4251, max=8517, avg=6530.92, 
stdev=367.39 00:12:46.370 lat (usec): min=4256, max=8539, avg=6579.20, stdev=332.62 00:12:46.370 clat percentiles (usec): 00:12:46.370 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6390], 00:12:46.370 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6587], 60.00th=[ 6652], 00:12:46.370 | 70.00th=[ 6718], 80.00th=[ 6783], 90.00th=[ 6915], 95.00th=[ 6980], 00:12:46.370 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 7570], 99.95th=[ 7635], 00:12:46.370 | 99.99th=[ 8455] 00:12:46.370 bw ( KiB/s): min=37856, max=39392, per=36.21%, avg=38624.00, stdev=1086.12, samples=2 00:12:46.370 iops : min= 9464, max= 9848, avg=9656.00, stdev=271.53, samples=2 00:12:46.370 lat (usec) : 750=0.01% 00:12:46.370 lat (msec) : 2=0.05%, 4=0.19%, 10=99.75% 00:12:46.370 cpu : usr=7.99%, sys=13.69%, ctx=1184, majf=0, minf=9 00:12:46.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:46.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.370 issued rwts: total=9272,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.370 job1: (groupid=0, jobs=1): err= 0: pid=1121194: Thu Apr 18 13:41:48 2024 00:12:46.370 read: IOPS=8282, BW=32.4MiB/s (33.9MB/s)(32.4MiB/1002msec) 00:12:46.370 slat (usec): min=3, max=1892, avg=56.81, stdev=200.39 00:12:46.370 clat (usec): min=1026, max=19717, avg=7439.85, stdev=2251.52 00:12:46.370 lat (usec): min=1777, max=19727, avg=7496.65, stdev=2264.43 00:12:46.370 clat percentiles (usec): 00:12:46.370 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 6652], 00:12:46.370 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6849], 60.00th=[ 6915], 00:12:46.370 | 70.00th=[ 6980], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[13435], 00:12:46.370 | 99.00th=[17957], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:12:46.370 | 99.99th=[19792] 00:12:46.370 write: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec); 0 zone resets 00:12:46.370 slat (usec): min=4, max=1854, avg=55.07, stdev=188.39 00:12:46.370 clat (usec): min=5335, max=20365, avg=7466.30, stdev=2810.69 00:12:46.370 lat (usec): min=6105, max=20875, avg=7521.36, stdev=2827.53 00:12:46.370 clat percentiles (usec): 00:12:46.370 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6390], 00:12:46.370 | 30.00th=[ 6456], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6587], 00:12:46.370 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[12125], 95.00th=[14222], 00:12:46.370 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:12:46.370 | 99.99th=[20317] 00:12:46.370 bw ( KiB/s): min=30320, max=39144, per=32.57%, avg=34732.00, stdev=6239.51, samples=2 00:12:46.371 iops : min= 7580, max= 9786, avg=8683.00, stdev=1559.88, samples=2 00:12:46.371 lat (msec) : 2=0.09%, 4=0.22%, 10=89.33%, 20=10.36%, 50=0.01% 00:12:46.371 cpu : usr=7.49%, sys=9.79%, ctx=1170, majf=0, minf=11 00:12:46.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:46.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.371 issued rwts: total=8299,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.371 job2: (groupid=0, jobs=1): err= 0: pid=1121195: Thu Apr 18 13:41:48 2024 00:12:46.371 read: IOPS=3059, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1004msec) 00:12:46.371 slat (usec): min=3, max=3952, avg=150.39, stdev=403.18 00:12:46.371 clat (usec): min=15187, max=28570, avg=19504.41, stdev=1563.16 00:12:46.371 lat (usec): min=16405, max=28588, avg=19654.80, stdev=1562.87 00:12:46.371 clat percentiles (usec): 00:12:46.371 | 1.00th=[16909], 5.00th=[17695], 10.00th=[17957], 20.00th=[18482], 00:12:46.371 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:12:46.371 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20579], 95.00th=[23200], 00:12:46.371 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26084], 99.95th=[28443], 00:12:46.371 | 99.99th=[28443] 00:12:46.371 write: IOPS=3500, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1004msec); 0 zone resets 00:12:46.371 slat (usec): min=4, max=4032, avg=147.52, stdev=405.29 00:12:46.371 clat (usec): min=1943, max=27457, avg=18990.40, stdev=2563.06 00:12:46.371 lat (usec): min=3655, max=27473, avg=19137.92, stdev=2565.43 00:12:46.371 clat percentiles (usec): 00:12:46.371 | 1.00th=[ 8848], 5.00th=[16319], 10.00th=[17171], 20.00th=[17695], 00:12:46.371 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:12:46.371 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20579], 95.00th=[24511], 00:12:46.371 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608], 00:12:46.371 | 99.99th=[27395] 00:12:46.371 bw ( KiB/s): min=12744, max=14352, per=12.70%, avg=13548.00, stdev=1137.03, samples=2 00:12:46.371 iops : min= 3186, max= 3588, avg=3387.00, stdev=284.26, samples=2 00:12:46.371 lat (msec) : 2=0.02%, 4=0.02%, 10=0.73%, 20=78.17%, 50=21.07% 00:12:46.371 cpu : usr=2.09%, sys=5.88%, ctx=863, majf=0, minf=19 00:12:46.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:46.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.371 issued rwts: total=3072,3515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.371 job3: (groupid=0, jobs=1): err= 0: pid=1121196: Thu Apr 18 13:41:48 2024 00:12:46.371 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:12:46.371 slat (usec): min=3, max=4425, avg=104.47, stdev=331.31 00:12:46.371 clat (usec): min=7074, max=22735, avg=13835.20, stdev=2574.04 00:12:46.371 lat (usec): min=7096, max=22869, avg=13939.67, stdev=2594.92 00:12:46.371 clat percentiles (usec): 00:12:46.371 | 1.00th=[ 7898], 5.00th=[ 8356], 10.00th=[ 8455], 20.00th=[13435], 00:12:46.371 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:12:46.371 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15664], 95.00th=[16450], 00:12:46.371 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20055], 99.95th=[22152], 00:12:46.371 | 99.99th=[22676] 00:12:46.371 write: IOPS=4808, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1003msec); 0 zone resets 00:12:46.371 slat (usec): min=4, max=2528, avg=100.61, stdev=293.20 00:12:46.371 clat (usec): min=2748, max=20395, avg=13114.41, stdev=3154.06 00:12:46.371 lat (usec): min=3460, max=21223, avg=13215.02, stdev=3177.51 00:12:46.371 clat percentiles (usec): 00:12:46.371 | 1.00th=[ 6587], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:12:46.371 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:12:46.371 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15795], 95.00th=[17957], 00:12:46.371 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:12:46.371 | 99.99th=[20317] 
00:12:46.371 bw ( KiB/s): min=17088, max=20521, per=17.63%, avg=18804.50, stdev=2427.50, samples=2 00:12:46.371 iops : min= 4272, max= 5130, avg=4701.00, stdev=606.70, samples=2 00:12:46.371 lat (msec) : 4=0.17%, 10=19.26%, 20=80.48%, 50=0.10% 00:12:46.371 cpu : usr=2.69%, sys=7.39%, ctx=958, majf=0, minf=13 00:12:46.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:46.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.371 issued rwts: total=4608,4823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.371 00:12:46.371 Run status group 0 (all jobs): 00:12:46.371 READ: bw=98.2MiB/s (103MB/s), 12.0MiB/s-36.1MiB/s (12.5MB/s-37.9MB/s), io=98.6MiB (103MB), run=1002-1004msec 00:12:46.371 WRITE: bw=104MiB/s (109MB/s), 13.7MiB/s-37.9MiB/s (14.3MB/s-39.8MB/s), io=105MiB (110MB), run=1002-1004msec 00:12:46.371 00:12:46.371 Disk stats (read/write): 00:12:46.371 nvme0n1: ios=7590/7680, merge=0/0, ticks=16278/15489, in_queue=31767, util=81.06% 00:12:46.371 nvme0n2: ios=6497/6656, merge=0/0, ticks=15927/16040, in_queue=31967, util=81.70% 00:12:46.371 nvme0n3: ios=2560/2676, merge=0/0, ticks=12244/12614, in_queue=24858, util=86.85% 00:12:46.371 nvme0n4: ios=3584/4078, merge=0/0, ticks=15549/17107, in_queue=32656, util=88.98% 00:12:46.371 13:41:48 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:46.371 [global] 00:12:46.371 thread=1 00:12:46.371 invalidate=1 00:12:46.371 rw=randwrite 00:12:46.371 time_based=1 00:12:46.371 runtime=1 00:12:46.371 ioengine=libaio 00:12:46.371 direct=1 00:12:46.371 bs=4096 00:12:46.371 iodepth=128 00:12:46.371 norandommap=0 00:12:46.371 numjobs=1 00:12:46.371 00:12:46.371 verify_dump=1 00:12:46.371 verify_backlog=512 00:12:46.371 verify_state_save=0 00:12:46.371 do_verify=1 00:12:46.371 verify=crc32c-intel 00:12:46.371 [job0] 00:12:46.371 filename=/dev/nvme0n1 00:12:46.371 [job1] 00:12:46.371 filename=/dev/nvme0n2 00:12:46.371 [job2] 00:12:46.371 filename=/dev/nvme0n3 00:12:46.371 [job3] 00:12:46.371 filename=/dev/nvme0n4 00:12:46.371 Could not set queue depth (nvme0n1) 00:12:46.371 Could not set queue depth (nvme0n2) 00:12:46.371 Could not set queue depth (nvme0n3) 00:12:46.371 Could not set queue depth (nvme0n4) 00:12:46.628 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:46.628 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:46.628 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:46.628 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:46.628 fio-3.35 00:12:46.628 Starting 4 threads 00:12:47.998 00:12:47.998 job0: (groupid=0, jobs=1): err= 0: pid=1121538: Thu Apr 18 13:41:50 2024 00:12:47.998 read: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec) 00:12:47.998 slat (usec): min=3, max=1679, avg=54.40, stdev=184.98 00:12:47.998 clat (usec): min=5489, max=15085, avg=7357.26, stdev=1766.74 00:12:47.998 lat (usec): min=5494, max=15736, avg=7411.66, stdev=1779.36 00:12:47.998 clat percentiles (usec): 00:12:47.998 | 1.00th=[ 5997], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 6718], 00:12:47.998 | 30.00th=[ 
6783], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 6980], 00:12:47.998 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7504], 95.00th=[13829], 00:12:47.998 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:12:47.998 | 99.99th=[15139] 00:12:47.998 write: IOPS=8986, BW=35.1MiB/s (36.8MB/s)(35.3MiB/1005msec); 0 zone resets 00:12:47.998 slat (usec): min=4, max=3409, avg=52.99, stdev=182.19 00:12:47.998 clat (usec): min=1610, max=17198, avg=7029.93, stdev=2002.70 00:12:47.998 lat (usec): min=1621, max=17863, avg=7082.92, stdev=2018.93 00:12:47.998 clat percentiles (usec): 00:12:47.998 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6259], 00:12:47.999 | 30.00th=[ 6325], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:12:47.999 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7242], 95.00th=[13173], 00:12:47.999 | 99.00th=[13960], 99.50th=[14222], 99.90th=[16450], 99.95th=[17171], 00:12:47.999 | 99.99th=[17171] 00:12:47.999 bw ( KiB/s): min=31296, max=39936, per=34.92%, avg=35616.00, stdev=6109.40, samples=2 00:12:47.999 iops : min= 7824, max= 9984, avg=8904.00, stdev=1527.35, samples=2 00:12:47.999 lat (msec) : 2=0.06%, 4=0.19%, 10=91.99%, 20=7.76% 00:12:47.999 cpu : usr=5.88%, sys=11.25%, ctx=1211, majf=0, minf=1 00:12:47.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:47.999 issued rwts: total=8704,9031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:47.999 job1: (groupid=0, jobs=1): err= 0: pid=1121539: Thu Apr 18 13:41:50 2024 00:12:47.999 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:12:47.999 slat (usec): min=3, max=3386, avg=135.93, stdev=366.36 00:12:47.999 clat (usec): min=11202, max=21100, avg=17524.96, stdev=1975.90 00:12:47.999 lat (usec): min=11210, max=21546, avg=17660.89, stdev=1981.14 00:12:47.999 clat percentiles (usec): 00:12:47.999 | 1.00th=[13042], 5.00th=[13566], 10.00th=[14091], 20.00th=[16057], 00:12:47.999 | 30.00th=[17171], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:12:47.999 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[19792], 00:12:47.999 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21103], 99.95th=[21103], 00:12:47.999 | 99.99th=[21103] 00:12:47.999 write: IOPS=3767, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1006msec); 0 zone resets 00:12:47.999 slat (usec): min=4, max=3196, avg=129.20, stdev=354.25 00:12:47.999 clat (usec): min=3969, max=20879, avg=16880.79, stdev=2189.81 00:12:47.999 lat (usec): min=4888, max=20896, avg=17009.99, stdev=2197.73 00:12:47.999 clat percentiles (usec): 00:12:47.999 | 1.00th=[ 8029], 5.00th=[12911], 10.00th=[13304], 20.00th=[16057], 00:12:47.999 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:12:47.999 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[19006], 00:12:47.999 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:12:47.999 | 99.99th=[20841] 00:12:47.999 bw ( KiB/s): min=14128, max=15206, per=14.38%, avg=14667.00, stdev=762.26, samples=2 00:12:47.999 iops : min= 3532, max= 3801, avg=3666.50, stdev=190.21, samples=2 00:12:47.999 lat (msec) : 4=0.01%, 10=0.81%, 20=97.45%, 50=1.72% 00:12:47.999 cpu : usr=2.29%, sys=6.37%, ctx=965, majf=0, minf=1 00:12:47.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 
00:12:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:47.999 issued rwts: total=3584,3790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:47.999 job2: (groupid=0, jobs=1): err= 0: pid=1121540: Thu Apr 18 13:41:50 2024 00:12:47.999 read: IOPS=7555, BW=29.5MiB/s (30.9MB/s)(29.6MiB/1002msec) 00:12:47.999 slat (usec): min=3, max=1488, avg=65.46, stdev=240.96 00:12:47.999 clat (usec): min=609, max=9531, avg=8577.18, stdev=599.93 00:12:47.999 lat (usec): min=1932, max=9536, avg=8642.64, stdev=550.48 00:12:47.999 clat percentiles (usec): 00:12:47.999 | 1.00th=[ 6325], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:12:47.999 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8717], 00:12:47.999 | 70.00th=[ 8848], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9241], 00:12:47.999 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[ 9503], 99.95th=[ 9503], 00:12:47.999 | 99.99th=[ 9503] 00:12:47.999 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:12:47.999 slat (usec): min=4, max=2095, avg=60.38, stdev=220.34 00:12:47.999 clat (usec): min=6465, max=9616, avg=8059.49, stdev=356.36 00:12:47.999 lat (usec): min=6628, max=9625, avg=8119.87, stdev=284.16 00:12:47.999 clat percentiles (usec): 00:12:47.999 | 1.00th=[ 6849], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 7832], 00:12:47.999 | 30.00th=[ 7898], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8160], 00:12:47.999 | 70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[ 8455], 95.00th=[ 8586], 00:12:47.999 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[ 9110], 99.95th=[ 9241], 00:12:47.999 | 99.99th=[ 9634] 00:12:47.999 bw ( KiB/s): min=29208, max=32232, per=30.12%, avg=30720.00, stdev=2138.29, samples=2 00:12:47.999 iops : min= 7302, max= 8058, avg=7680.00, stdev=534.57, samples=2 00:12:47.999 lat (usec) : 750=0.01% 00:12:47.999 lat (msec) : 2=0.06%, 4=0.14%, 10=99.79% 00:12:47.999 cpu : usr=5.19%, sys=9.69%, ctx=954, majf=0, minf=1 00:12:47.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:47.999 issued rwts: total=7571,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:47.999 job3: (groupid=0, jobs=1): err= 0: pid=1121541: Thu Apr 18 13:41:50 2024 00:12:47.999 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:12:47.999 slat (usec): min=3, max=2531, avg=96.42, stdev=283.29 00:12:47.999 clat (usec): min=3514, max=15859, avg=12605.83, stdev=2162.69 00:12:47.999 lat (usec): min=3519, max=15871, avg=12702.25, stdev=2173.02 00:12:47.999 clat percentiles (usec): 00:12:47.999 | 1.00th=[ 6652], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[10028], 00:12:47.999 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:12:47.999 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14615], 00:12:47.999 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15533], 99.95th=[15795], 00:12:47.999 | 99.99th=[15795] 00:12:47.999 write: IOPS=5136, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1002msec); 0 zone resets 00:12:47.999 slat (usec): min=4, max=2049, avg=91.66, stdev=268.12 00:12:47.999 clat (usec): min=1091, max=15532, avg=12073.32, stdev=2122.15 00:12:47.999 lat (usec): 
min=1096, max=15540, avg=12164.98, stdev=2131.89 00:12:47.999 clat percentiles (usec): 00:12:47.999 | 1.00th=[ 7373], 5.00th=[ 8160], 10.00th=[ 8291], 20.00th=[ 8848], 00:12:47.999 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:12:47.999 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13829], 95.00th=[13960], 00:12:47.999 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15008], 99.95th=[15270], 00:12:47.999 | 99.99th=[15533] 00:12:47.999 bw ( KiB/s): min=20480, max=20480, per=20.08%, avg=20480.00, stdev= 0.00, samples=2 00:12:47.999 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:12:47.999 lat (msec) : 2=0.04%, 4=0.28%, 10=19.91%, 20=79.77% 00:12:47.999 cpu : usr=5.19%, sys=6.99%, ctx=984, majf=0, minf=1 00:12:47.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:47.999 issued rwts: total=5120,5147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:47.999 00:12:47.999 Run status group 0 (all jobs): 00:12:47.999 READ: bw=97.0MiB/s (102MB/s), 13.9MiB/s-33.8MiB/s (14.6MB/s-35.5MB/s), io=97.6MiB (102MB), run=1002-1006msec 00:12:47.999 WRITE: bw=99.6MiB/s (104MB/s), 14.7MiB/s-35.1MiB/s (15.4MB/s-36.8MB/s), io=100MiB (105MB), run=1002-1006msec 00:12:47.999 00:12:47.999 Disk stats (read/write): 00:12:47.999 nvme0n1: ios=7869/8192, merge=0/0, ticks=52668/51392, in_queue=104060, util=85.17% 00:12:47.999 nvme0n2: ios=2832/3072, merge=0/0, ticks=12697/13488, in_queue=26185, util=85.85% 00:12:47.999 nvme0n3: ios=6206/6656, merge=0/0, ticks=17120/16715, in_queue=33835, util=88.55% 00:12:47.999 nvme0n4: ios=3907/4096, merge=0/0, ticks=17283/17308, in_queue=34591, util=89.49% 00:12:47.999 13:41:50 -- target/fio.sh@55 -- # sync 00:12:47.999 13:41:50 -- target/fio.sh@59 -- # fio_pid=1121679 00:12:47.999 13:41:50 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:47.999 13:41:50 -- target/fio.sh@61 -- # sleep 3 00:12:47.999 [global] 00:12:47.999 thread=1 00:12:47.999 invalidate=1 00:12:47.999 rw=read 00:12:47.999 time_based=1 00:12:47.999 runtime=10 00:12:47.999 ioengine=libaio 00:12:47.999 direct=1 00:12:47.999 bs=4096 00:12:47.999 iodepth=1 00:12:47.999 norandommap=1 00:12:47.999 numjobs=1 00:12:47.999 00:12:47.999 [job0] 00:12:47.999 filename=/dev/nvme0n1 00:12:47.999 [job1] 00:12:47.999 filename=/dev/nvme0n2 00:12:47.999 [job2] 00:12:47.999 filename=/dev/nvme0n3 00:12:47.999 [job3] 00:12:47.999 filename=/dev/nvme0n4 00:12:47.999 Could not set queue depth (nvme0n1) 00:12:47.999 Could not set queue depth (nvme0n2) 00:12:47.999 Could not set queue depth (nvme0n3) 00:12:47.999 Could not set queue depth (nvme0n4) 00:12:47.999 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:47.999 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:47.999 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:47.999 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:47.999 fio-3.35 00:12:47.999 Starting 4 threads 00:12:51.287 13:41:53 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_raid_delete concat0 00:12:51.287 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=82001920, buflen=4096 00:12:51.287 fio: pid=1121776, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:51.287 13:41:53 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:51.287 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=87912448, buflen=4096 00:12:51.287 fio: pid=1121775, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:51.287 13:41:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:51.287 13:41:54 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:51.852 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=19271680, buflen=4096 00:12:51.852 fio: pid=1121773, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:51.852 13:41:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:51.852 13:41:54 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:52.111 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=30261248, buflen=4096 00:12:52.111 fio: pid=1121774, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:52.111 00:12:52.111 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1121773: Thu Apr 18 13:41:54 2024 00:12:52.111 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(146MiB/3639msec) 00:12:52.111 slat (usec): min=4, max=12112, avg= 8.25, stdev=111.32 00:12:52.111 clat (usec): min=57, max=369, avg=87.23, stdev=20.69 00:12:52.111 lat (usec): min=63, max=12195, avg=95.48, stdev=113.27 00:12:52.111 clat percentiles (usec): 00:12:52.111 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:12:52.111 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 82], 00:12:52.111 | 70.00th=[ 90], 80.00th=[ 101], 90.00th=[ 119], 95.00th=[ 131], 00:12:52.111 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 194], 99.95th=[ 206], 00:12:52.111 | 99.99th=[ 233] 00:12:52.111 bw ( KiB/s): min=29684, max=45608, per=35.00%, avg=41211.57, stdev=5373.40, samples=7 00:12:52.111 iops : min= 7421, max=11402, avg=10302.86, stdev=1343.34, samples=7 00:12:52.111 lat (usec) : 100=79.51%, 250=20.48%, 500=0.01% 00:12:52.111 cpu : usr=3.52%, sys=9.29%, ctx=37479, majf=0, minf=1 00:12:52.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.111 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.111 issued rwts: total=37474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.111 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1121774: Thu Apr 18 13:41:54 2024 00:12:52.111 read: IOPS=9925, BW=38.8MiB/s (40.7MB/s)(157MiB/4046msec) 00:12:52.111 slat (usec): min=4, max=12877, avg= 7.88, stdev=123.23 00:12:52.111 clat (usec): min=56, max=416, avg=91.49, stdev=27.05 00:12:52.111 lat (usec): min=61, max=13062, avg=99.37, stdev=126.42 00:12:52.111 clat percentiles (usec): 00:12:52.111 | 1.00th=[ 61], 5.00th=[ 65], 10.00th=[ 70], 20.00th=[ 74], 00:12:52.111 | 30.00th=[ 76], 40.00th=[ 
77], 50.00th=[ 80], 60.00th=[ 84], 00:12:52.111 | 70.00th=[ 96], 80.00th=[ 117], 90.00th=[ 133], 95.00th=[ 149], 00:12:52.111 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 202], 99.95th=[ 208], 00:12:52.111 | 99.99th=[ 233] 00:12:52.111 bw ( KiB/s): min=29940, max=45624, per=32.72%, avg=38534.00, stdev=6846.07, samples=7 00:12:52.111 iops : min= 7485, max=11406, avg=9633.43, stdev=1711.57, samples=7 00:12:52.111 lat (usec) : 100=73.26%, 250=26.73%, 500=0.01% 00:12:52.111 cpu : usr=3.44%, sys=9.30%, ctx=40169, majf=0, minf=1 00:12:52.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.111 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.111 issued rwts: total=40157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.111 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1121775: Thu Apr 18 13:41:54 2024 00:12:52.111 read: IOPS=6612, BW=25.8MiB/s (27.1MB/s)(83.8MiB/3246msec) 00:12:52.111 slat (usec): min=4, max=7933, avg= 8.90, stdev=76.05 00:12:52.111 clat (usec): min=76, max=539, avg=140.58, stdev=21.27 00:12:52.111 lat (usec): min=84, max=8048, avg=149.48, stdev=78.73 00:12:52.111 clat percentiles (usec): 00:12:52.111 | 1.00th=[ 85], 5.00th=[ 94], 10.00th=[ 123], 20.00th=[ 130], 00:12:52.111 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:12:52.111 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 174], 00:12:52.111 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 223], 99.95th=[ 233], 00:12:52.111 | 99.99th=[ 269] 00:12:52.111 bw ( KiB/s): min=25080, max=28056, per=22.40%, avg=26377.67, stdev=1388.86, samples=6 00:12:52.111 iops : min= 6270, max= 7014, avg=6594.33, stdev=347.20, samples=6 00:12:52.111 lat (usec) : 100=6.34%, 250=93.64%, 500=0.02%, 750=0.01% 00:12:52.111 cpu : usr=2.77%, sys=7.18%, ctx=21467, majf=0, minf=1 00:12:52.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.111 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.111 issued rwts: total=21464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.111 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1121776: Thu Apr 18 13:41:54 2024 00:12:52.111 read: IOPS=6958, BW=27.2MiB/s (28.5MB/s)(78.2MiB/2877msec) 00:12:52.111 slat (nsec): min=4862, max=36207, avg=6679.06, stdev=1577.03 00:12:52.111 clat (usec): min=74, max=353, avg=135.54, stdev=35.14 00:12:52.111 lat (usec): min=80, max=359, avg=142.22, stdev=35.45 00:12:52.111 clat percentiles (usec): 00:12:52.111 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 97], 00:12:52.111 | 30.00th=[ 117], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 143], 00:12:52.111 | 70.00th=[ 151], 80.00th=[ 163], 90.00th=[ 182], 95.00th=[ 194], 00:12:52.111 | 99.00th=[ 225], 99.50th=[ 235], 99.90th=[ 273], 99.95th=[ 277], 00:12:52.111 | 99.99th=[ 285] 00:12:52.111 bw ( KiB/s): min=24936, max=28368, per=22.14%, avg=26072.40, stdev=1430.07, samples=5 00:12:52.111 iops : min= 6234, max= 7092, avg=6518.00, stdev=357.48, samples=5 00:12:52.111 lat (usec) : 100=23.02%, 250=76.65%, 500=0.32% 00:12:52.111 cpu : usr=2.75%, sys=6.26%, ctx=20021, majf=0, 
minf=1 00:12:52.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.112 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.112 issued rwts: total=20021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.112 00:12:52.112 Run status group 0 (all jobs): 00:12:52.112 READ: bw=115MiB/s (121MB/s), 25.8MiB/s-40.2MiB/s (27.1MB/s-42.2MB/s), io=465MiB (488MB), run=2877-4046msec 00:12:52.112 00:12:52.112 Disk stats (read/write): 00:12:52.112 nvme0n1: ios=36944/0, merge=0/0, ticks=3301/0, in_queue=3301, util=94.69% 00:12:52.112 nvme0n2: ios=37470/0, merge=0/0, ticks=3580/0, in_queue=3580, util=95.10% 00:12:52.112 nvme0n3: ios=20372/0, merge=0/0, ticks=2915/0, in_queue=2915, util=96.18% 00:12:52.112 nvme0n4: ios=19600/0, merge=0/0, ticks=2707/0, in_queue=2707, util=96.69% 00:12:52.112 13:41:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:52.112 13:41:54 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:52.676 13:41:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:52.676 13:41:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:52.932 13:41:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:52.932 13:41:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:53.328 13:41:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:53.328 13:41:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:53.585 13:41:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:53.585 13:41:56 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:54.150 13:41:56 -- target/fio.sh@69 -- # fio_status=0 00:12:54.150 13:41:56 -- target/fio.sh@70 -- # wait 1121679 00:12:54.150 13:41:56 -- target/fio.sh@70 -- # fio_status=4 00:12:54.150 13:41:56 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.080 13:41:57 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.080 13:41:57 -- common/autotest_common.sh@1205 -- # local i=0 00:12:55.080 13:41:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:55.080 13:41:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.080 13:41:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:55.080 13:41:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.080 13:41:57 -- common/autotest_common.sh@1217 -- # return 0 00:12:55.080 13:41:57 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:55.080 13:41:57 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:55.080 nvmf hotplug test: fio failed as expected 00:12:55.080 13:41:57 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:55.645 13:41:58 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:55.645 13:41:58 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:55.645 13:41:58 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:55.645 13:41:58 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:55.645 13:41:58 -- target/fio.sh@91 -- # nvmftestfini 00:12:55.645 13:41:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:55.645 13:41:58 -- nvmf/common.sh@117 -- # sync 00:12:55.645 13:41:58 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:55.645 13:41:58 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:55.645 13:41:58 -- nvmf/common.sh@120 -- # set +e 00:12:55.645 13:41:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.645 13:41:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:55.645 rmmod nvme_rdma 00:12:55.645 rmmod nvme_fabrics 00:12:55.645 13:41:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.645 13:41:58 -- nvmf/common.sh@124 -- # set -e 00:12:55.645 13:41:58 -- nvmf/common.sh@125 -- # return 0 00:12:55.645 13:41:58 -- nvmf/common.sh@478 -- # '[' -n 1119388 ']' 00:12:55.645 13:41:58 -- nvmf/common.sh@479 -- # killprocess 1119388 00:12:55.645 13:41:58 -- common/autotest_common.sh@936 -- # '[' -z 1119388 ']' 00:12:55.645 13:41:58 -- common/autotest_common.sh@940 -- # kill -0 1119388 00:12:55.645 13:41:58 -- common/autotest_common.sh@941 -- # uname 00:12:55.645 13:41:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:55.645 13:41:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1119388 00:12:55.645 13:41:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:55.645 13:41:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:55.645 13:41:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1119388' 00:12:55.645 killing process with pid 1119388 00:12:55.645 13:41:58 -- common/autotest_common.sh@955 -- # kill 1119388 00:12:55.645 13:41:58 -- common/autotest_common.sh@960 -- # wait 1119388 00:12:55.903 13:41:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:55.903 13:41:58 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:55.903 00:12:55.903 real 0m26.391s 00:12:55.903 user 1m44.951s 00:12:55.903 sys 0m6.762s 00:12:55.903 13:41:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:55.903 13:41:58 -- common/autotest_common.sh@10 -- # set +x 00:12:55.903 ************************************ 00:12:55.903 END TEST nvmf_fio_target 00:12:55.903 ************************************ 00:12:55.903 13:41:58 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:55.903 13:41:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:55.903 13:41:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.903 13:41:58 -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 ************************************ 00:12:56.160 START TEST nvmf_bdevio 00:12:56.160 ************************************ 00:12:56.160 13:41:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:56.160 * Looking for test storage... 
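Note: the hotplug test above tears down in a fixed order before the next suite starts: the initiator is disconnected, the subsystem is deleted over RPC, the nvme-rdma/nvme-fabrics modules are unloaded, and the target process is killed. A condensed, illustrative equivalent of that sequence, with values taken from this run (this is a sketch, not the test scripts themselves):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drop the initiator side first
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the target subsystem
    modprobe -r nvme-rdma nvme-fabrics                     # mirrors the rmmod messages above
    kill 1119388                                           # nvmf target pid from this run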
00:12:56.161 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:56.161 13:41:58 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.161 13:41:58 -- nvmf/common.sh@7 -- # uname -s 00:12:56.161 13:41:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.161 13:41:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.161 13:41:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.161 13:41:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.161 13:41:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.161 13:41:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.161 13:41:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.161 13:41:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.161 13:41:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.161 13:41:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.161 13:41:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:12:56.161 13:41:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:12:56.161 13:41:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.161 13:41:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.161 13:41:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.161 13:41:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.161 13:41:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:56.161 13:41:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.161 13:41:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.161 13:41:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.161 13:41:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.161 13:41:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.161 13:41:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.161 13:41:58 -- paths/export.sh@5 -- # export PATH 00:12:56.161 13:41:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.161 13:41:58 -- nvmf/common.sh@47 -- # : 0 00:12:56.161 13:41:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.161 13:41:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.161 13:41:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.161 13:41:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.161 13:41:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.161 13:41:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.161 13:41:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.161 13:41:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.161 13:41:58 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.161 13:41:58 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:56.161 13:41:58 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:56.161 13:41:58 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:56.161 13:41:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.161 13:41:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:56.161 13:41:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:56.161 13:41:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:56.161 13:41:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.161 13:41:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.161 13:41:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.161 13:41:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:56.161 13:41:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:56.161 13:41:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:56.161 13:41:58 -- common/autotest_common.sh@10 -- # set +x 00:12:59.442 13:42:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:59.442 13:42:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.442 13:42:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.442 13:42:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.442 13:42:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.442 13:42:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.442 13:42:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.442 13:42:01 -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.442 13:42:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.442 13:42:01 -- nvmf/common.sh@296 
-- # e810=() 00:12:59.442 13:42:01 -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.442 13:42:01 -- nvmf/common.sh@297 -- # x722=() 00:12:59.442 13:42:01 -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.442 13:42:01 -- nvmf/common.sh@298 -- # mlx=() 00:12:59.442 13:42:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.442 13:42:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.442 13:42:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.442 13:42:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:59.442 13:42:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:59.442 13:42:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:59.442 13:42:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.442 13:42:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:12:59.442 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:12:59.442 13:42:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.442 13:42:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:12:59.442 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:12:59.442 13:42:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.442 13:42:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.442 13:42:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.442 13:42:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
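Note: the probe above walks the detected PCI devices looking for Mellanox (vendor 0x15b3) parts and then lists the kernel net devices that sit under each matching PCI node in sysfs, which is where the mlx_0_0/mlx_0_1 names printed below come from. A minimal standalone sketch of the same idea (illustrative only, not the test's common.sh):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        [ "$vendor" = "0x15b3" ] || continue             # Mellanox NICs only
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null                        # netdevs behind this port
    done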
00:12:59.442 13:42:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.442 13:42:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:12:59.442 Found net devices under 0000:81:00.0: mlx_0_0 00:12:59.442 13:42:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.442 13:42:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.442 13:42:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:59.442 13:42:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.442 13:42:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:12:59.442 Found net devices under 0000:81:00.1: mlx_0_1 00:12:59.442 13:42:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.442 13:42:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:59.442 13:42:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:59.442 13:42:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:59.442 13:42:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:59.442 13:42:01 -- nvmf/common.sh@58 -- # uname 00:12:59.442 13:42:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:59.442 13:42:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:59.442 13:42:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:59.442 13:42:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:59.442 13:42:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:59.442 13:42:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:59.442 13:42:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:59.442 13:42:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:59.442 13:42:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:59.442 13:42:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:59.442 13:42:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:59.442 13:42:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.442 13:42:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.442 13:42:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.442 13:42:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.442 13:42:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.442 13:42:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.442 13:42:01 -- nvmf/common.sh@105 -- # continue 2 00:12:59.442 13:42:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.442 13:42:01 -- nvmf/common.sh@105 -- # continue 2 00:12:59.442 13:42:01 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:12:59.442 13:42:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:59.442 13:42:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.442 13:42:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.442 13:42:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.442 13:42:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.442 13:42:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:59.442 13:42:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:59.442 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.442 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:12:59.442 altname enp129s0f0np0 00:12:59.442 inet 192.168.100.8/24 scope global mlx_0_0 00:12:59.442 valid_lft forever preferred_lft forever 00:12:59.442 13:42:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.442 13:42:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:59.442 13:42:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.442 13:42:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.442 13:42:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.442 13:42:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.442 13:42:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:59.442 13:42:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:59.442 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.442 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:12:59.442 altname enp129s0f1np1 00:12:59.442 inet 192.168.100.9/24 scope global mlx_0_1 00:12:59.442 valid_lft forever preferred_lft forever 00:12:59.442 13:42:01 -- nvmf/common.sh@411 -- # return 0 00:12:59.442 13:42:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:59.442 13:42:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:59.442 13:42:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:59.442 13:42:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:59.442 13:42:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.442 13:42:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.442 13:42:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.442 13:42:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.442 13:42:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.442 13:42:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.442 13:42:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.442 13:42:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.442 13:42:01 -- nvmf/common.sh@105 -- # continue 2 00:12:59.443 13:42:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.443 13:42:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.443 13:42:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.443 13:42:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.443 13:42:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.443 13:42:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.443 13:42:01 -- nvmf/common.sh@105 -- # continue 2 00:12:59.443 13:42:01 -- nvmf/common.sh@86 
-- # for nic_name in $(get_rdma_if_list) 00:12:59.443 13:42:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:59.443 13:42:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.443 13:42:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.443 13:42:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.443 13:42:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.443 13:42:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:59.443 13:42:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:59.443 13:42:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.443 13:42:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.443 13:42:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.443 13:42:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.443 13:42:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:59.443 192.168.100.9' 00:12:59.443 13:42:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:59.443 192.168.100.9' 00:12:59.443 13:42:01 -- nvmf/common.sh@446 -- # head -n 1 00:12:59.443 13:42:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:59.443 13:42:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:59.443 192.168.100.9' 00:12:59.443 13:42:01 -- nvmf/common.sh@447 -- # head -n 1 00:12:59.443 13:42:01 -- nvmf/common.sh@447 -- # tail -n +2 00:12:59.443 13:42:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:59.443 13:42:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:59.443 13:42:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:59.443 13:42:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:59.443 13:42:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:59.443 13:42:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:59.443 13:42:01 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:59.443 13:42:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:59.443 13:42:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:59.443 13:42:01 -- common/autotest_common.sh@10 -- # set +x 00:12:59.443 13:42:01 -- nvmf/common.sh@470 -- # nvmfpid=1124850 00:12:59.443 13:42:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:59.443 13:42:01 -- nvmf/common.sh@471 -- # waitforlisten 1124850 00:12:59.443 13:42:01 -- common/autotest_common.sh@817 -- # '[' -z 1124850 ']' 00:12:59.443 13:42:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.443 13:42:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:59.443 13:42:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.443 13:42:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:59.443 13:42:01 -- common/autotest_common.sh@10 -- # set +x 00:12:59.443 [2024-04-18 13:42:01.729322] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
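Note: the target is launched with core mask 0x78 (binary 0111 1000), i.e. CPU cores 3-6; the reactor start-up messages just below confirm exactly those cores. A quick way to decode such a mask (plain bash, illustrative only):

    mask=0x78
    for cpu in $(seq 0 63); do
        (( (mask >> cpu) & 1 )) && printf '%d ' "$cpu"   # print each core selected by the mask
    done; echo                                           # -> 3 4 5 6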
00:12:59.443 [2024-04-18 13:42:01.729417] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.443 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.443 [2024-04-18 13:42:01.815300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.443 [2024-04-18 13:42:01.939271] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.443 [2024-04-18 13:42:01.939335] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.443 [2024-04-18 13:42:01.939351] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.443 [2024-04-18 13:42:01.939365] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.443 [2024-04-18 13:42:01.939376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.443 [2024-04-18 13:42:01.939464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:59.443 [2024-04-18 13:42:01.939741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:59.443 [2024-04-18 13:42:01.939797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:59.443 [2024-04-18 13:42:01.939801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.443 13:42:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:59.443 13:42:02 -- common/autotest_common.sh@850 -- # return 0 00:12:59.443 13:42:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:59.443 13:42:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:59.443 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.443 13:42:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.443 13:42:02 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:59.443 13:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.443 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.443 [2024-04-18 13:42:02.123220] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x230e970/0x2312e60) succeed. 00:12:59.443 [2024-04-18 13:42:02.135305] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x230ff60/0x23544f0) succeed. 
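Note: with the RDMA transport created and both mlx5 IB devices registered, the following lines assemble the bdevio target entirely over RPC: a 64 MiB malloc bdev, a subsystem, a namespace backed by that bdev, and an RDMA listener on 192.168.100.8:4420. Collected into one illustrative rpc.py sequence (values copied from this log; the test itself issues them through rpc_cmd):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420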
00:12:59.701 13:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.701 13:42:02 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:59.701 13:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.701 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.701 Malloc0 00:12:59.701 13:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.701 13:42:02 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:59.701 13:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.701 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.701 13:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.701 13:42:02 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.701 13:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.701 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.701 13:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.701 13:42:02 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:59.701 13:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.701 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:12:59.701 [2024-04-18 13:42:02.341110] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:59.701 13:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.701 13:42:02 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:59.701 13:42:02 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:59.701 13:42:02 -- nvmf/common.sh@521 -- # config=() 00:12:59.701 13:42:02 -- nvmf/common.sh@521 -- # local subsystem config 00:12:59.701 13:42:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:59.701 13:42:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:59.701 { 00:12:59.701 "params": { 00:12:59.701 "name": "Nvme$subsystem", 00:12:59.701 "trtype": "$TEST_TRANSPORT", 00:12:59.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:59.701 "adrfam": "ipv4", 00:12:59.701 "trsvcid": "$NVMF_PORT", 00:12:59.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:59.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:59.701 "hdgst": ${hdgst:-false}, 00:12:59.701 "ddgst": ${ddgst:-false} 00:12:59.701 }, 00:12:59.701 "method": "bdev_nvme_attach_controller" 00:12:59.701 } 00:12:59.701 EOF 00:12:59.701 )") 00:12:59.701 13:42:02 -- nvmf/common.sh@543 -- # cat 00:12:59.701 13:42:02 -- nvmf/common.sh@545 -- # jq . 00:12:59.701 13:42:02 -- nvmf/common.sh@546 -- # IFS=, 00:12:59.701 13:42:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:59.701 "params": { 00:12:59.701 "name": "Nvme1", 00:12:59.701 "trtype": "rdma", 00:12:59.701 "traddr": "192.168.100.8", 00:12:59.701 "adrfam": "ipv4", 00:12:59.701 "trsvcid": "4420", 00:12:59.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:59.701 "hdgst": false, 00:12:59.701 "ddgst": false 00:12:59.701 }, 00:12:59.701 "method": "bdev_nvme_attach_controller" 00:12:59.701 }' 00:12:59.701 [2024-04-18 13:42:02.391732] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
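Note: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry shown above, which bdevio receives on /dev/fd/62 and uses to attach to the subsystem just created, producing the Nvme1 controller (and the Nvme1n1 namespace the tests below run against). Reformatted for readability, the entry printed in this run is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

(The full JSON handed to bdevio wraps entries like this in a bdev-subsystem config section; only the fragment above is echoed in the log.)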
00:12:59.701 [2024-04-18 13:42:02.391830] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124945 ] 00:12:59.701 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.701 [2024-04-18 13:42:02.486034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.959 [2024-04-18 13:42:02.611575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.959 [2024-04-18 13:42:02.611627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.959 [2024-04-18 13:42:02.611631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.217 I/O targets: 00:13:00.217 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:00.217 00:13:00.217 00:13:00.217 CUnit - A unit testing framework for C - Version 2.1-3 00:13:00.217 http://cunit.sourceforge.net/ 00:13:00.217 00:13:00.217 00:13:00.217 Suite: bdevio tests on: Nvme1n1 00:13:00.217 Test: blockdev write read block ...passed 00:13:00.217 Test: blockdev write zeroes read block ...passed 00:13:00.217 Test: blockdev write zeroes read no split ...passed 00:13:00.217 Test: blockdev write zeroes read split ...passed 00:13:00.217 Test: blockdev write zeroes read split partial ...passed 00:13:00.217 Test: blockdev reset ...[2024-04-18 13:42:02.860786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:00.217 [2024-04-18 13:42:02.887839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:13:00.217 [2024-04-18 13:42:02.920596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:00.217 passed 00:13:00.217 Test: blockdev write read 8 blocks ...passed 00:13:00.217 Test: blockdev write read size > 128k ...passed 00:13:00.217 Test: blockdev write read invalid size ...passed 00:13:00.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.217 Test: blockdev write read max offset ...passed 00:13:00.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.217 Test: blockdev writev readv 8 blocks ...passed 00:13:00.217 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.217 Test: blockdev writev readv block ...passed 00:13:00.217 Test: blockdev writev readv size > 128k ...passed 00:13:00.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.217 Test: blockdev comparev and writev ...[2024-04-18 13:42:02.925308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.217 [2024-04-18 13:42:02.925347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:00.217 [2024-04-18 13:42:02.925368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.217 [2024-04-18 13:42:02.925385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:00.217 [2024-04-18 13:42:02.925604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.217 [2024-04-18 13:42:02.925630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:00.217 [2024-04-18 13:42:02.925648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.217 [2024-04-18 13:42:02.925664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:00.217 [2024-04-18 13:42:02.925899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.217 [2024-04-18 13:42:02.925923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:00.217 [2024-04-18 13:42:02.925950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.217 [2024-04-18 13:42:02.925968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:00.217 [2024-04-18 13:42:02.926173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.217 [2024-04-18 13:42:02.926199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:00.217 [2024-04-18 13:42:02.926217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.218 [2024-04-18 13:42:02.926233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:00.218 passed 00:13:00.218 Test: blockdev nvme passthru rw ...passed 00:13:00.218 Test: blockdev nvme passthru vendor specific ...[2024-04-18 13:42:02.926671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:00.218 [2024-04-18 13:42:02.926698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:00.218 [2024-04-18 13:42:02.926764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:00.218 [2024-04-18 13:42:02.926788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:00.218 [2024-04-18 13:42:02.926851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:00.218 [2024-04-18 13:42:02.926873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:00.218 [2024-04-18 13:42:02.926934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:00.218 [2024-04-18 13:42:02.926966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:00.218 passed 00:13:00.218 Test: blockdev nvme admin passthru ...passed 00:13:00.218 Test: blockdev copy ...passed 00:13:00.218 00:13:00.218 Run Summary: Type Total Ran Passed Failed Inactive 00:13:00.218 suites 1 1 n/a 0 0 00:13:00.218 tests 23 23 23 0 0 00:13:00.218 asserts 152 152 152 0 n/a 00:13:00.218 00:13:00.218 Elapsed time = 0.224 seconds 00:13:00.511 13:42:03 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.511 13:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.511 13:42:03 -- common/autotest_common.sh@10 -- # set +x 00:13:00.511 13:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.511 13:42:03 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:00.511 13:42:03 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:00.511 13:42:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:00.511 13:42:03 -- nvmf/common.sh@117 -- # sync 00:13:00.511 13:42:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:00.511 13:42:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:00.511 13:42:03 -- nvmf/common.sh@120 -- # set +e 00:13:00.511 13:42:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:00.511 13:42:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:00.511 rmmod nvme_rdma 00:13:00.511 rmmod nvme_fabrics 00:13:00.511 13:42:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:00.511 13:42:03 -- nvmf/common.sh@124 -- # set -e 00:13:00.511 13:42:03 -- nvmf/common.sh@125 -- # return 0 00:13:00.511 13:42:03 -- nvmf/common.sh@478 -- # '[' -n 1124850 ']' 00:13:00.511 13:42:03 -- nvmf/common.sh@479 -- # killprocess 1124850 00:13:00.511 13:42:03 -- common/autotest_common.sh@936 -- # '[' -z 1124850 ']' 00:13:00.511 13:42:03 -- common/autotest_common.sh@940 -- # kill -0 1124850 00:13:00.511 13:42:03 -- common/autotest_common.sh@941 -- # uname 00:13:00.511 13:42:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:00.511 13:42:03 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1124850 00:13:00.511 13:42:03 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:00.511 13:42:03 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:00.511 13:42:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1124850' 00:13:00.511 killing process with pid 1124850 00:13:00.511 13:42:03 -- common/autotest_common.sh@955 -- # kill 1124850 00:13:00.511 13:42:03 -- common/autotest_common.sh@960 -- # wait 1124850 00:13:01.081 13:42:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:01.081 13:42:03 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:01.081 00:13:01.081 real 0m4.911s 00:13:01.081 user 0m9.112s 00:13:01.081 sys 0m2.657s 00:13:01.081 13:42:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:01.081 13:42:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.081 ************************************ 00:13:01.081 END TEST nvmf_bdevio 00:13:01.081 ************************************ 00:13:01.081 13:42:03 -- nvmf/nvmf.sh@58 -- # '[' rdma = tcp ']' 00:13:01.081 13:42:03 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:13:01.081 13:42:03 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:13:01.081 13:42:03 -- nvmf/nvmf.sh@71 -- # '[' rdma = tcp ']' 00:13:01.081 13:42:03 -- nvmf/nvmf.sh@77 -- # [[ rdma == \r\d\m\a ]] 00:13:01.081 13:42:03 -- nvmf/nvmf.sh@78 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:13:01.081 13:42:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:01.082 13:42:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.082 13:42:03 -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 ************************************ 00:13:01.082 START TEST nvmf_device_removal 00:13:01.082 ************************************ 00:13:01.082 13:42:03 -- common/autotest_common.sh@1111 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:13:01.082 * Looking for test storage... 
00:13:01.341 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:01.341 13:42:03 -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:13:01.341 13:42:03 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:01.341 13:42:03 -- common/autotest_common.sh@34 -- # set -e 00:13:01.341 13:42:03 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:01.341 13:42:03 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:01.341 13:42:03 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:13:01.341 13:42:03 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:01.341 13:42:03 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:13:01.341 13:42:03 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:01.341 13:42:03 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:01.341 13:42:03 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:01.341 13:42:03 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:01.341 13:42:03 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:01.341 13:42:03 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:01.341 13:42:03 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:01.341 13:42:03 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:01.341 13:42:03 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:01.341 13:42:03 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:01.341 13:42:03 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:01.341 13:42:03 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:01.341 13:42:03 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:01.341 13:42:03 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:01.341 13:42:03 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:01.341 13:42:03 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:01.341 13:42:03 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:01.341 13:42:03 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:01.341 13:42:03 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:13:01.341 13:42:03 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:01.341 13:42:03 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:01.341 13:42:03 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:01.341 13:42:03 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:01.342 13:42:03 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:01.342 13:42:03 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:01.342 13:42:03 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:01.342 13:42:03 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:01.342 13:42:03 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:01.342 13:42:03 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:01.342 13:42:03 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:01.342 13:42:03 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:01.342 13:42:03 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:01.342 13:42:03 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:01.342 13:42:03 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 
00:13:01.342 13:42:03 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:01.342 13:42:03 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:01.342 13:42:03 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:01.342 13:42:03 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:01.342 13:42:03 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:01.342 13:42:03 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:01.342 13:42:03 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:01.342 13:42:03 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:01.342 13:42:03 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:01.342 13:42:03 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:01.342 13:42:03 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:01.342 13:42:03 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:01.342 13:42:03 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:01.342 13:42:03 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:01.342 13:42:03 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:01.342 13:42:03 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:01.342 13:42:03 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:01.342 13:42:03 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:01.342 13:42:03 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:13:01.342 13:42:03 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:13:01.342 13:42:03 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:13:01.342 13:42:03 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:13:01.342 13:42:03 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:13:01.342 13:42:03 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:13:01.342 13:42:03 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:13:01.342 13:42:03 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:13:01.342 13:42:03 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:13:01.342 13:42:03 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:13:01.342 13:42:03 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:13:01.342 13:42:03 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:13:01.342 13:42:03 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:13:01.342 13:42:03 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:13:01.342 13:42:03 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:13:01.342 13:42:03 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:01.342 13:42:03 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:13:01.342 13:42:03 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:13:01.342 13:42:03 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:13:01.342 13:42:03 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:13:01.342 13:42:03 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:13:01.342 13:42:03 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:13:01.342 13:42:03 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:13:01.342 13:42:03 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:13:01.342 13:42:03 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:13:01.342 13:42:03 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:13:01.342 13:42:03 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:13:01.342 13:42:03 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:01.342 13:42:03 -- common/build_config.sh@81 -- # 
CONFIG_CROSS_PREFIX= 00:13:01.342 13:42:03 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:13:01.342 13:42:03 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:01.342 13:42:03 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:01.342 13:42:03 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:01.342 13:42:03 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:01.342 13:42:03 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:01.342 13:42:03 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:01.342 13:42:03 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:13:01.342 13:42:03 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:01.342 13:42:03 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:01.342 13:42:03 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:01.342 13:42:03 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:01.342 13:42:03 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:01.342 13:42:03 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:01.342 13:42:03 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:01.342 13:42:03 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:13:01.342 13:42:03 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:01.342 #define SPDK_CONFIG_H 00:13:01.342 #define SPDK_CONFIG_APPS 1 00:13:01.342 #define SPDK_CONFIG_ARCH native 00:13:01.342 #undef SPDK_CONFIG_ASAN 00:13:01.342 #undef SPDK_CONFIG_AVAHI 00:13:01.342 #undef SPDK_CONFIG_CET 00:13:01.342 #define SPDK_CONFIG_COVERAGE 1 00:13:01.342 #define SPDK_CONFIG_CROSS_PREFIX 00:13:01.342 #undef SPDK_CONFIG_CRYPTO 00:13:01.342 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:01.342 #undef SPDK_CONFIG_CUSTOMOCF 00:13:01.342 #undef SPDK_CONFIG_DAOS 00:13:01.342 #define SPDK_CONFIG_DAOS_DIR 00:13:01.342 #define SPDK_CONFIG_DEBUG 1 00:13:01.342 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:01.342 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:01.342 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:01.342 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:01.342 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:01.342 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:13:01.342 #define SPDK_CONFIG_EXAMPLES 1 00:13:01.342 #undef SPDK_CONFIG_FC 00:13:01.342 #define SPDK_CONFIG_FC_PATH 00:13:01.342 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:01.342 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:01.342 #undef SPDK_CONFIG_FUSE 00:13:01.342 #undef SPDK_CONFIG_FUZZER 00:13:01.342 #define SPDK_CONFIG_FUZZER_LIB 00:13:01.342 #undef SPDK_CONFIG_GOLANG 00:13:01.342 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:01.342 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:01.342 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:01.342 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:13:01.342 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:01.342 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:01.342 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 
00:13:01.342 #define SPDK_CONFIG_IDXD 1 00:13:01.342 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:01.342 #undef SPDK_CONFIG_IPSEC_MB 00:13:01.342 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:01.342 #define SPDK_CONFIG_ISAL 1 00:13:01.342 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:01.342 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:01.342 #define SPDK_CONFIG_LIBDIR 00:13:01.342 #undef SPDK_CONFIG_LTO 00:13:01.342 #define SPDK_CONFIG_MAX_LCORES 00:13:01.342 #define SPDK_CONFIG_NVME_CUSE 1 00:13:01.342 #undef SPDK_CONFIG_OCF 00:13:01.342 #define SPDK_CONFIG_OCF_PATH 00:13:01.342 #define SPDK_CONFIG_OPENSSL_PATH 00:13:01.342 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:01.342 #define SPDK_CONFIG_PGO_DIR 00:13:01.342 #undef SPDK_CONFIG_PGO_USE 00:13:01.342 #define SPDK_CONFIG_PREFIX /usr/local 00:13:01.342 #undef SPDK_CONFIG_RAID5F 00:13:01.342 #undef SPDK_CONFIG_RBD 00:13:01.342 #define SPDK_CONFIG_RDMA 1 00:13:01.342 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:01.342 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:01.342 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:01.342 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:01.342 #define SPDK_CONFIG_SHARED 1 00:13:01.342 #undef SPDK_CONFIG_SMA 00:13:01.342 #define SPDK_CONFIG_TESTS 1 00:13:01.342 #undef SPDK_CONFIG_TSAN 00:13:01.342 #define SPDK_CONFIG_UBLK 1 00:13:01.342 #define SPDK_CONFIG_UBSAN 1 00:13:01.342 #undef SPDK_CONFIG_UNIT_TESTS 00:13:01.342 #undef SPDK_CONFIG_URING 00:13:01.342 #define SPDK_CONFIG_URING_PATH 00:13:01.342 #undef SPDK_CONFIG_URING_ZNS 00:13:01.342 #undef SPDK_CONFIG_USDT 00:13:01.342 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:01.342 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:01.342 #undef SPDK_CONFIG_VFIO_USER 00:13:01.342 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:01.342 #define SPDK_CONFIG_VHOST 1 00:13:01.342 #define SPDK_CONFIG_VIRTIO 1 00:13:01.342 #undef SPDK_CONFIG_VTUNE 00:13:01.342 #define SPDK_CONFIG_VTUNE_DIR 00:13:01.342 #define SPDK_CONFIG_WERROR 1 00:13:01.342 #define SPDK_CONFIG_WPDK_DIR 00:13:01.342 #undef SPDK_CONFIG_XNVME 00:13:01.342 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:01.342 13:42:03 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:01.343 13:42:03 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:01.343 13:42:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.343 13:42:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.343 13:42:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.343 13:42:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.343 13:42:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.343 13:42:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.343 13:42:03 -- paths/export.sh@5 -- # export PATH 00:13:01.343 13:42:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.343 13:42:03 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:01.343 13:42:03 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:01.343 13:42:03 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:01.343 13:42:03 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:01.343 13:42:03 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:01.343 13:42:03 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:01.343 13:42:03 -- pm/common@67 -- # TEST_TAG=N/A 00:13:01.343 13:42:03 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:13:01.343 13:42:03 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:13:01.343 13:42:03 -- pm/common@71 -- # uname -s 00:13:01.343 13:42:03 -- pm/common@71 -- # PM_OS=Linux 00:13:01.343 13:42:03 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:01.343 13:42:03 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:13:01.343 13:42:03 -- pm/common@76 -- # [[ Linux == Linux ]] 00:13:01.343 13:42:03 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:13:01.343 13:42:03 -- pm/common@76 -- # [[ ! 
-e /.dockerenv ]] 00:13:01.343 13:42:03 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:01.343 13:42:03 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:01.343 13:42:03 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:13:01.343 13:42:03 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:13:01.343 13:42:03 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:13:01.343 13:42:03 -- common/autotest_common.sh@57 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:13:01.343 13:42:03 -- common/autotest_common.sh@61 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:01.343 13:42:03 -- common/autotest_common.sh@63 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:13:01.343 13:42:03 -- common/autotest_common.sh@65 -- # : 1 00:13:01.343 13:42:03 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:01.343 13:42:03 -- common/autotest_common.sh@67 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:13:01.343 13:42:03 -- common/autotest_common.sh@69 -- # : 00:13:01.343 13:42:03 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:13:01.343 13:42:03 -- common/autotest_common.sh@71 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:13:01.343 13:42:03 -- common/autotest_common.sh@73 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:13:01.343 13:42:03 -- common/autotest_common.sh@75 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:13:01.343 13:42:03 -- common/autotest_common.sh@77 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:01.343 13:42:03 -- common/autotest_common.sh@79 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:13:01.343 13:42:03 -- common/autotest_common.sh@81 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:13:01.343 13:42:03 -- common/autotest_common.sh@83 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:13:01.343 13:42:03 -- common/autotest_common.sh@85 -- # : 1 00:13:01.343 13:42:03 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:13:01.343 13:42:03 -- common/autotest_common.sh@87 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:13:01.343 13:42:03 -- common/autotest_common.sh@89 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:13:01.343 13:42:03 -- common/autotest_common.sh@91 -- # : 1 00:13:01.343 13:42:03 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:13:01.343 13:42:03 -- common/autotest_common.sh@93 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:13:01.343 13:42:03 -- common/autotest_common.sh@95 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:01.343 13:42:03 -- common/autotest_common.sh@97 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:13:01.343 13:42:03 -- common/autotest_common.sh@99 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 
00:13:01.343 13:42:03 -- common/autotest_common.sh@101 -- # : rdma 00:13:01.343 13:42:03 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:01.343 13:42:03 -- common/autotest_common.sh@103 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:13:01.343 13:42:03 -- common/autotest_common.sh@105 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:13:01.343 13:42:03 -- common/autotest_common.sh@107 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:13:01.343 13:42:03 -- common/autotest_common.sh@109 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:13:01.343 13:42:03 -- common/autotest_common.sh@111 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:13:01.343 13:42:03 -- common/autotest_common.sh@113 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:13:01.343 13:42:03 -- common/autotest_common.sh@115 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:13:01.343 13:42:03 -- common/autotest_common.sh@117 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:01.343 13:42:03 -- common/autotest_common.sh@119 -- # : 0 00:13:01.343 13:42:03 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:13:01.343 13:42:03 -- common/autotest_common.sh@121 -- # : 1 00:13:01.343 13:42:03 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:13:01.344 13:42:03 -- common/autotest_common.sh@123 -- # : 00:13:01.344 13:42:03 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:01.344 13:42:03 -- common/autotest_common.sh@125 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:13:01.344 13:42:03 -- common/autotest_common.sh@127 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:13:01.344 13:42:03 -- common/autotest_common.sh@129 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:13:01.344 13:42:03 -- common/autotest_common.sh@131 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:13:01.344 13:42:03 -- common/autotest_common.sh@133 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:13:01.344 13:42:03 -- common/autotest_common.sh@135 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:13:01.344 13:42:03 -- common/autotest_common.sh@137 -- # : 00:13:01.344 13:42:03 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:13:01.344 13:42:03 -- common/autotest_common.sh@139 -- # : true 00:13:01.344 13:42:03 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:13:01.344 13:42:03 -- common/autotest_common.sh@141 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:13:01.344 13:42:03 -- common/autotest_common.sh@143 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:13:01.344 13:42:03 -- common/autotest_common.sh@145 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:13:01.344 13:42:03 -- common/autotest_common.sh@147 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@148 -- # export 
SPDK_TEST_USE_IGB_UIO 00:13:01.344 13:42:03 -- common/autotest_common.sh@149 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:13:01.344 13:42:03 -- common/autotest_common.sh@151 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:13:01.344 13:42:03 -- common/autotest_common.sh@153 -- # : mlx5 00:13:01.344 13:42:03 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:13:01.344 13:42:03 -- common/autotest_common.sh@155 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:13:01.344 13:42:03 -- common/autotest_common.sh@157 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:13:01.344 13:42:03 -- common/autotest_common.sh@159 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:13:01.344 13:42:03 -- common/autotest_common.sh@161 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:13:01.344 13:42:03 -- common/autotest_common.sh@163 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:13:01.344 13:42:03 -- common/autotest_common.sh@166 -- # : 00:13:01.344 13:42:03 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:13:01.344 13:42:03 -- common/autotest_common.sh@168 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:13:01.344 13:42:03 -- common/autotest_common.sh@170 -- # : 0 00:13:01.344 13:42:03 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:01.344 13:42:03 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.344 13:42:03 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:01.344 13:42:03 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:01.344 13:42:03 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:01.344 13:42:03 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:01.344 13:42:03 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:01.344 13:42:03 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:13:01.344 13:42:03 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:01.344 13:42:03 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:01.344 13:42:03 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:01.344 13:42:03 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:01.344 13:42:03 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:01.344 13:42:03 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:13:01.344 13:42:03 -- common/autotest_common.sh@199 -- # cat 00:13:01.344 13:42:03 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:13:01.344 13:42:03 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:01.344 13:42:03 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:01.344 13:42:03 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:01.344 13:42:03 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 
00:13:01.344 13:42:03 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:13:01.344 13:42:03 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:13:01.344 13:42:03 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:01.344 13:42:03 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:01.344 13:42:03 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:01.344 13:42:03 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:01.344 13:42:03 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.344 13:42:03 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.344 13:42:03 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.344 13:42:03 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.344 13:42:03 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:01.344 13:42:03 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:01.344 13:42:03 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.344 13:42:03 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.344 13:42:03 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:13:01.344 13:42:03 -- common/autotest_common.sh@252 -- # export valgrind= 00:13:01.344 13:42:03 -- common/autotest_common.sh@252 -- # valgrind= 00:13:01.344 13:42:03 -- common/autotest_common.sh@258 -- # uname -s 00:13:01.344 13:42:03 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:13:01.344 13:42:03 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:13:01.344 13:42:03 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:13:01.344 13:42:03 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:13:01.344 13:42:03 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:13:01.344 13:42:03 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:13:01.344 13:42:03 -- common/autotest_common.sh@268 -- # MAKE=make 00:13:01.344 13:42:03 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:13:01.344 13:42:03 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:13:01.344 13:42:03 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:13:01.344 13:42:03 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:13:01.344 13:42:03 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:13:01.344 13:42:03 -- common/autotest_common.sh@289 -- # for i in "$@" 00:13:01.344 13:42:03 -- common/autotest_common.sh@290 -- # case "$i" in 00:13:01.344 13:42:03 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:13:01.344 13:42:03 -- common/autotest_common.sh@307 -- # [[ -z 1125146 ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@307 -- # kill -0 1125146 00:13:01.345 13:42:03 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:13:01.345 13:42:03 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:13:01.345 
13:42:03 -- common/autotest_common.sh@320 -- # local mount target_dir 00:13:01.345 13:42:03 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:13:01.345 13:42:03 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:13:01.345 13:42:03 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:13:01.345 13:42:03 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:13:01.345 13:42:03 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.V1VJBW 00:13:01.345 13:42:03 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:01.345 13:42:03 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.V1VJBW/tests/target /tmp/spdk.V1VJBW 00:13:01.345 13:42:03 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@316 -- # df -T 00:13:01.345 13:42:03 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:13:01.345 13:42:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=995188736 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:13:01.345 13:42:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=4289241088 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=51489538048 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994586112 00:13:01.345 13:42:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=10505048064 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=30938025984 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997291008 00:13:01.345 13:42:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=59265024 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 
00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=12376203264 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398919680 00:13:01.345 13:42:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=22716416 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996877312 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997295104 00:13:01.345 13:42:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=417792 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199451648 00:13:01.345 13:42:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199455744 00:13:01.345 13:42:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:13:01.345 13:42:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:01.345 13:42:03 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:13:01.345 * Looking for test storage... 00:13:01.345 13:42:03 -- common/autotest_common.sh@357 -- # local target_space new_size 00:13:01.345 13:42:03 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:13:01.345 13:42:03 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:01.345 13:42:03 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:01.345 13:42:03 -- common/autotest_common.sh@361 -- # mount=/ 00:13:01.345 13:42:03 -- common/autotest_common.sh@363 -- # target_space=51489538048 00:13:01.345 13:42:03 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:13:01.345 13:42:03 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:13:01.345 13:42:03 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@370 -- # new_size=12719640576 00:13:01.345 13:42:03 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:01.345 13:42:03 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:01.345 13:42:03 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:01.345 13:42:03 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:01.345 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:01.345 13:42:03 -- common/autotest_common.sh@378 -- # return 0 00:13:01.345 13:42:03 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:13:01.345 13:42:03 -- 
common/autotest_common.sh@1669 -- # shopt -s extdebug 00:13:01.345 13:42:03 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:01.345 13:42:03 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:01.345 13:42:03 -- common/autotest_common.sh@1673 -- # true 00:13:01.345 13:42:03 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:13:01.345 13:42:03 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:01.345 13:42:03 -- common/autotest_common.sh@27 -- # exec 00:13:01.345 13:42:03 -- common/autotest_common.sh@29 -- # exec 00:13:01.345 13:42:03 -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:01.345 13:42:03 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:01.345 13:42:03 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:01.345 13:42:03 -- common/autotest_common.sh@18 -- # set -x 00:13:01.345 13:42:03 -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.345 13:42:03 -- nvmf/common.sh@7 -- # uname -s 00:13:01.345 13:42:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.345 13:42:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.345 13:42:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.345 13:42:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.345 13:42:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.345 13:42:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.345 13:42:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.346 13:42:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.346 13:42:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.346 13:42:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.346 13:42:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:13:01.346 13:42:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:13:01.346 13:42:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.346 13:42:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.346 13:42:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.346 13:42:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.346 13:42:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:01.346 13:42:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.346 13:42:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.346 13:42:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.346 13:42:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.346 13:42:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.346 13:42:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.346 13:42:04 -- paths/export.sh@5 -- # export PATH 00:13:01.346 13:42:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.346 13:42:04 -- nvmf/common.sh@47 -- # : 0 00:13:01.346 13:42:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.346 13:42:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.346 13:42:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.346 13:42:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.346 13:42:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.346 13:42:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.346 13:42:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.346 13:42:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.346 13:42:04 -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:13:01.346 13:42:04 -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:13:01.346 13:42:04 -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:01.346 13:42:04 -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:13:01.346 13:42:04 -- target/device_removal.sh@18 -- # nvmftestinit 00:13:01.346 13:42:04 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:01.346 13:42:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.346 13:42:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:01.346 13:42:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:01.346 13:42:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:01.346 13:42:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.346 13:42:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.346 13:42:04 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.346 13:42:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:01.346 13:42:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:01.346 13:42:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.346 13:42:04 -- common/autotest_common.sh@10 -- # set +x 00:13:03.871 13:42:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:03.871 13:42:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.871 13:42:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.871 13:42:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.871 13:42:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.871 13:42:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.871 13:42:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.871 13:42:06 -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.871 13:42:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.871 13:42:06 -- nvmf/common.sh@296 -- # e810=() 00:13:03.871 13:42:06 -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.871 13:42:06 -- nvmf/common.sh@297 -- # x722=() 00:13:03.871 13:42:06 -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.871 13:42:06 -- nvmf/common.sh@298 -- # mlx=() 00:13:03.871 13:42:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.871 13:42:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.871 13:42:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.871 13:42:06 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:03.871 13:42:06 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:03.871 13:42:06 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:03.871 13:42:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.871 13:42:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.871 13:42:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:13:03.871 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:13:03.871 13:42:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:03.871 13:42:06 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:03.871 13:42:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:13:03.871 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:13:03.871 13:42:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:03.871 13:42:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.871 13:42:06 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:03.871 13:42:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.871 13:42:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.871 13:42:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:03.871 13:42:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.871 13:42:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:13:03.871 Found net devices under 0000:81:00.0: mlx_0_0 00:13:03.871 13:42:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.871 13:42:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.871 13:42:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.871 13:42:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:03.871 13:42:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.871 13:42:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:13:03.872 Found net devices under 0000:81:00.1: mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.872 13:42:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:03.872 13:42:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:03.872 13:42:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:03.872 13:42:06 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:03.872 13:42:06 -- nvmf/common.sh@58 -- # uname 00:13:03.872 13:42:06 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:03.872 13:42:06 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:03.872 13:42:06 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:03.872 13:42:06 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:03.872 13:42:06 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:03.872 13:42:06 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:03.872 13:42:06 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:03.872 13:42:06 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:03.872 13:42:06 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:03.872 13:42:06 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:03.872 13:42:06 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:03.872 13:42:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:03.872 13:42:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:03.872 13:42:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:03.872 13:42:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:03.872 13:42:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:13:03.872 13:42:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@105 -- # continue 2 00:13:03.872 13:42:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@105 -- # continue 2 00:13:03.872 13:42:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:03.872 13:42:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:03.872 13:42:06 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:03.872 13:42:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:03.872 309: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:03.872 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:13:03.872 altname enp129s0f0np0 00:13:03.872 inet 192.168.100.8/24 scope global mlx_0_0 00:13:03.872 valid_lft forever preferred_lft forever 00:13:03.872 13:42:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:03.872 13:42:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:03.872 13:42:06 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:03.872 13:42:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:03.872 310: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:03.872 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:13:03.872 altname enp129s0f1np1 00:13:03.872 inet 192.168.100.9/24 scope global mlx_0_1 00:13:03.872 valid_lft forever preferred_lft forever 00:13:03.872 13:42:06 -- nvmf/common.sh@411 -- # return 0 00:13:03.872 13:42:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:03.872 13:42:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:03.872 13:42:06 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:03.872 13:42:06 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:03.872 13:42:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:03.872 13:42:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:03.872 13:42:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:03.872 13:42:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:03.872 13:42:06 -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:03.872 13:42:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@105 -- # continue 2 00:13:03.872 13:42:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.872 13:42:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:03.872 13:42:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@105 -- # continue 2 00:13:03.872 13:42:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:03.872 13:42:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:03.872 13:42:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:03.872 13:42:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:03.872 13:42:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:03.872 13:42:06 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:03.872 192.168.100.9' 00:13:03.872 13:42:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:03.872 192.168.100.9' 00:13:03.872 13:42:06 -- nvmf/common.sh@446 -- # head -n 1 00:13:03.872 13:42:06 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:03.872 13:42:06 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:03.872 192.168.100.9' 00:13:03.872 13:42:06 -- nvmf/common.sh@447 -- # tail -n +2 00:13:03.872 13:42:06 -- nvmf/common.sh@447 -- # head -n 1 00:13:03.872 13:42:06 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:03.872 13:42:06 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:03.872 13:42:06 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:03.872 13:42:06 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:03.872 13:42:06 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:03.872 13:42:06 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:03.872 13:42:06 -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:13:03.872 13:42:06 -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:13:03.872 13:42:06 -- target/device_removal.sh@237 -- # BOND_MASK=24 00:13:03.872 13:42:06 -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:13:03.872 13:42:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:03.872 13:42:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.872 13:42:06 -- common/autotest_common.sh@10 -- # set +x 00:13:04.129 ************************************ 00:13:04.129 START TEST nvmf_device_removal_pci_remove_no_srq 00:13:04.129 
************************************ 00:13:04.129 13:42:06 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan --no-srq 00:13:04.129 13:42:06 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:13:04.129 13:42:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:04.129 13:42:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:04.129 13:42:06 -- common/autotest_common.sh@10 -- # set +x 00:13:04.129 13:42:06 -- nvmf/common.sh@470 -- # nvmfpid=1127198 00:13:04.129 13:42:06 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:04.130 13:42:06 -- nvmf/common.sh@471 -- # waitforlisten 1127198 00:13:04.130 13:42:06 -- common/autotest_common.sh@817 -- # '[' -z 1127198 ']' 00:13:04.130 13:42:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.130 13:42:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:04.130 13:42:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.130 13:42:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:04.130 13:42:06 -- common/autotest_common.sh@10 -- # set +x 00:13:04.130 [2024-04-18 13:42:06.817062] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:04.130 [2024-04-18 13:42:06.817166] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.130 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.387 [2024-04-18 13:42:06.949953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:04.387 [2024-04-18 13:42:07.075145] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.387 [2024-04-18 13:42:07.075217] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.387 [2024-04-18 13:42:07.075242] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.387 [2024-04-18 13:42:07.075257] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.387 [2024-04-18 13:42:07.075270] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:04.387 [2024-04-18 13:42:07.075362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.387 [2024-04-18 13:42:07.075370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.644 13:42:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:04.644 13:42:07 -- common/autotest_common.sh@850 -- # return 0 00:13:04.644 13:42:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:04.644 13:42:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:04.644 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.644 13:42:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.644 13:42:07 -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:13:04.644 13:42:07 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:13:04.644 13:42:07 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:13:04.644 13:42:07 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:13:04.644 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.644 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.644 [2024-04-18 13:42:07.350931] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18aaa30/0x18aef20) succeed. 00:13:04.644 [2024-04-18 13:42:07.363472] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18abf30/0x18f05b0) succeed. 00:13:04.644 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.644 13:42:07 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:13:04.644 13:42:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:04.644 13:42:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:04.645 13:42:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:04.645 13:42:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:04.645 13:42:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:04.645 13:42:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:04.645 13:42:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.645 13:42:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:04.645 13:42:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:04.645 13:42:07 -- nvmf/common.sh@105 -- # continue 2 00:13:04.645 13:42:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:04.645 13:42:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.645 13:42:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:04.645 13:42:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:04.645 13:42:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:04.645 13:42:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:04.645 13:42:07 -- nvmf/common.sh@105 -- # continue 2 00:13:04.645 13:42:07 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:13:04.645 13:42:07 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:13:04.645 13:42:07 -- target/device_removal.sh@25 -- # local -a dev_name 00:13:04.645 13:42:07 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:13:04.645 13:42:07 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:13:04.645 13:42:07 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:13:04.645 13:42:07 -- target/device_removal.sh@21 -- 
# echo nqn.2016-06.io.spdk:system_mlx_0_0 00:13:04.645 13:42:07 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:13:04.645 13:42:07 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:13:04.645 13:42:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:04.645 13:42:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:04.645 13:42:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:04.645 13:42:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:04.645 13:42:07 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:13:04.645 13:42:07 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:13:04.645 13:42:07 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:13:04.645 13:42:07 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:13:04.645 13:42:07 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:13:04.645 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.645 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:13:04.903 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:13:04.903 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:13:04.903 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 [2024-04-18 13:42:07.476810] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@41 -- # return 0 00:13:04.903 13:42:07 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:13:04.903 13:42:07 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:13:04.903 13:42:07 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@25 -- # local -a dev_name 00:13:04.903 13:42:07 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:13:04.903 13:42:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:04.903 13:42:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:04.903 13:42:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:04.903 13:42:07 -- nvmf/common.sh@113 -- # cut 
-d/ -f1 00:13:04.903 13:42:07 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:13:04.903 13:42:07 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:13:04.903 13:42:07 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:13:04.903 13:42:07 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:13:04.903 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:13:04.903 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:13:04.903 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:13:04.903 13:42:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:04.903 [2024-04-18 13:42:07.557634] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:13:04.903 13:42:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.903 13:42:07 -- target/device_removal.sh@41 -- # return 0 00:13:04.903 13:42:07 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@53 -- # return 0 00:13:04.903 13:42:07 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:13:04.903 13:42:07 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:13:04.903 13:42:07 -- target/device_removal.sh@87 -- # local dev_names 00:13:04.903 13:42:07 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:04.903 13:42:07 -- target/device_removal.sh@91 -- # bdevperf_pid=1127497 00:13:04.903 13:42:07 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:04.903 13:42:07 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:13:04.903 13:42:07 -- target/device_removal.sh@94 -- # waitforlisten 1127497 /var/tmp/bdevperf.sock 00:13:04.903 13:42:07 -- common/autotest_common.sh@817 -- # '[' -z 1127497 ']' 00:13:04.903 13:42:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:04.903 13:42:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:04.903 13:42:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:04.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:04.903 13:42:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:04.903 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:13:05.469 13:42:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:05.469 13:42:08 -- common/autotest_common.sh@850 -- # return 0 00:13:05.469 13:42:08 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:13:05.469 13:42:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.469 13:42:08 -- common/autotest_common.sh@10 -- # set +x 00:13:05.469 13:42:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.469 13:42:08 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:13:05.469 13:42:08 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:13:05.469 13:42:08 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:13:05.469 13:42:08 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:13:05.469 13:42:08 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:13:05.469 13:42:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:05.469 13:42:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:05.469 13:42:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:05.469 13:42:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:05.469 13:42:08 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:13:05.469 13:42:08 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:13:05.469 13:42:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.469 13:42:08 -- common/autotest_common.sh@10 -- # set +x 00:13:05.469 Nvme_mlx_0_0n1 00:13:05.469 13:42:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.469 13:42:08 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:13:05.469 13:42:08 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:13:05.469 13:42:08 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:13:05.469 13:42:08 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:13:05.469 13:42:08 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:13:05.469 13:42:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:05.469 13:42:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:05.469 13:42:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:05.469 13:42:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:05.469 13:42:08 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:13:05.469 13:42:08 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:13:05.469 13:42:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.469 13:42:08 -- common/autotest_common.sh@10 -- # set +x 00:13:05.469 Nvme_mlx_0_1n1 00:13:05.469 13:42:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.469 13:42:08 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=1127647 00:13:05.469 13:42:08 -- target/device_removal.sh@112 -- # sleep 5 00:13:05.469 13:42:08 -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:13:10.728 13:42:13 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:13:10.728 13:42:13 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/net/mlx_0_0/device 00:13:10.728 13:42:13 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/infiniband 00:13:10.728 13:42:13 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:13:10.728 13:42:13 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:13:10.728 13:42:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:10.728 13:42:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:10.728 13:42:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.728 13:42:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.728 13:42:13 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:13:10.728 13:42:13 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/net/mlx_0_0/device 00:13:10.728 13:42:13 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0 00:13:10.728 13:42:13 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:13:10.728 13:42:13 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:13:10.728 13:42:13 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:10.728 13:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.728 13:42:13 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:10.728 13:42:13 -- common/autotest_common.sh@10 -- # set +x 00:13:10.728 13:42:13 -- target/device_removal.sh@77 -- # grep mlx5_0 00:13:10.728 13:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.728 mlx5_0 00:13:10.728 13:42:13 -- target/device_removal.sh@78 -- # return 0 00:13:10.728 13:42:13 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@67 -- # echo 1 00:13:10.728 13:42:13 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:13:10.728 13:42:13 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/net/mlx_0_0/device 00:13:10.728 [2024-04-18 13:42:13.318933] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
00:13:10.728 [2024-04-18 13:42:13.319068] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:10.728 [2024-04-18 13:42:13.319208] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:10.728 [2024-04-18 13:42:13.319235] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 68 00:13:10.728 [2024-04-18 13:42:13.319250] rdma.c: 703:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:13:10.728 [2024-04-18 13:42:13.319263] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319275] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319287] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319299] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319311] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319329] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319341] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319354] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319375] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:10.728 [2024-04-18 13:42:13.319387] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:10.728 [2024-04-18 13:42:13.319398] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319409] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319424] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319437] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319450] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319461] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319473] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319484] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319495] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319506] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319518] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319529] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319541] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319552] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319563] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319575] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319586] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319597] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319609] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319621] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319633] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319644] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319656] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319667] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319679] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319690] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319701] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319713] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319724] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319735] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319747] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:10.728 [2024-04-18 13:42:13.319758] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:10.728 [2024-04-18 13:42:13.319769] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319782] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319794] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319805] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319817] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319832] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319844] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319855] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319867] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319879] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319890] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319902] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319913] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319924] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319936] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319957] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.319968] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.319987] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 
00:13:10.728 [2024-04-18 13:42:13.319999] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.320011] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.320022] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.320034] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.320045] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.320056] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.728 [2024-04-18 13:42:13.320068] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.728 [2024-04-18 13:42:13.320079] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320091] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320101] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320113] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320125] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320137] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320148] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320160] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320173] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320185] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320197] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320209] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320220] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320232] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320243] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320255] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320266] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320278] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320289] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320300] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320311] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320323] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320338] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320350] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:10.729 [2024-04-18 13:42:13.320361] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:10.729 [2024-04-18 13:42:13.320372] rdma.c: 
689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320384] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320396] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320407] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320418] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320430] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320441] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320452] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320464] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320485] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320497] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320509] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320521] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320532] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320545] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320556] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320567] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:10.729 [2024-04-18 13:42:13.320579] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:10.729 [2024-04-18 13:42:13.320590] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320602] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320623] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320634] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320646] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320658] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320669] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320681] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320692] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320703] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320714] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320726] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320737] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320749] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320768] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 
00:13:10.729 [2024-04-18 13:42:13.320780] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320791] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:10.729 [2024-04-18 13:42:13.320802] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:10.729 [2024-04-18 13:42:13.320813] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320825] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320840] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320851] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320863] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320874] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:10.729 [2024-04-18 13:42:13.320886] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:10.729 [2024-04-18 13:42:13.320897] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:17.328 13:42:19 -- target/device_removal.sh@147 -- # seq 1 10 00:13:17.328 13:42:19 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:13:17.328 13:42:19 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:13:17.328 13:42:19 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:13:17.328 13:42:19 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:17.328 13:42:19 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:17.328 13:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.328 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:13:17.328 13:42:19 -- target/device_removal.sh@77 -- # grep mlx5_0 00:13:17.328 13:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.328 13:42:19 -- target/device_removal.sh@78 -- # return 1 00:13:17.328 13:42:19 -- target/device_removal.sh@149 -- # break 00:13:17.328 13:42:19 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:17.328 13:42:19 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:17.328 13:42:19 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:17.328 13:42:19 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:17.328 13:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.328 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:13:17.328 13:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.328 13:42:19 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:13:17.328 13:42:19 -- target/device_removal.sh@160 -- # rescan_pci 00:13:17.328 13:42:19 -- target/device_removal.sh@57 -- # echo 1 00:13:17.328 13:42:20 -- target/device_removal.sh@162 -- # seq 1 10 00:13:17.328 13:42:20 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:13:17.328 13:42:20 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net 00:13:17.328 13:42:20 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:13:17.328 13:42:20 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:13:17.328 13:42:20 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:13:17.328 13:42:20 -- target/device_removal.sh@171 -- # break 00:13:17.328 13:42:20 -- 
target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:13:17.328 13:42:20 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:13:17.585 [2024-04-18 13:42:20.353722] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18aaa30/0x18aef20) succeed. 00:13:17.585 [2024-04-18 13:42:20.353837] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 00:13:20.864 13:42:23 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:13:20.864 13:42:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:20.864 13:42:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:20.864 13:42:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.864 13:42:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.864 13:42:23 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:13:20.864 13:42:23 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:13:20.864 13:42:23 -- target/device_removal.sh@186 -- # seq 1 10 00:13:20.864 13:42:23 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:13:20.864 13:42:23 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:20.864 13:42:23 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:20.864 13:42:23 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:20.864 13:42:23 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:20.864 13:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.864 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:13:20.864 [2024-04-18 13:42:23.272004] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:20.864 [2024-04-18 13:42:23.272056] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:13:20.864 [2024-04-18 13:42:23.272078] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:13:20.864 [2024-04-18 13:42:23.272097] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:13:20.864 13:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.864 13:42:23 -- target/device_removal.sh@187 -- # ib_count=2 00:13:20.864 13:42:23 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:13:20.864 13:42:23 -- target/device_removal.sh@189 -- # break 00:13:20.864 13:42:23 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:13:20.864 13:42:23 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:13:20.864 13:42:23 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:13:20.864 13:42:23 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:13:20.864 13:42:23 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:13:20.864 13:42:23 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:13:20.864 13:42:23 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.1/net/mlx_0_1/device 00:13:20.864 13:42:23 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.1/infiniband 00:13:20.864 13:42:23 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:13:20.864 13:42:23 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:13:20.864 13:42:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:20.864 13:42:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:20.864 13:42:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.864 13:42:23 
-- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.864 13:42:23 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:13:20.864 13:42:23 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:13:20.864 13:42:23 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:13:20.864 13:42:23 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.1/net/mlx_0_1/device 00:13:20.864 13:42:23 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.1 00:13:20.864 13:42:23 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:13:20.865 13:42:23 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:13:20.865 13:42:23 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:20.865 13:42:23 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:20.865 13:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.865 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:13:20.865 13:42:23 -- target/device_removal.sh@77 -- # grep mlx5_1 00:13:20.865 13:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.865 mlx5_1 00:13:20.865 13:42:23 -- target/device_removal.sh@78 -- # return 0 00:13:20.865 13:42:23 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:13:20.865 13:42:23 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:13:20.865 13:42:23 -- target/device_removal.sh@67 -- # echo 1 00:13:20.865 13:42:23 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:13:20.865 13:42:23 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:13:20.865 13:42:23 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.1/net/mlx_0_1/device 00:13:20.865 [2024-04-18 13:42:23.365634] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
00:13:20.865 [2024-04-18 13:42:23.365739] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:20.865 [2024-04-18 13:42:23.373845] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:20.865 [2024-04-18 13:42:23.373883] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:13:20.865 [2024-04-18 13:42:23.373899] rdma.c: 703:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:13:20.865 [2024-04-18 13:42:23.373911] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.373923] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.373934] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.373959] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.373978] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374000] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374012] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374023] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374034] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374045] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374057] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374068] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374079] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374091] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374102] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374113] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374124] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374135] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374147] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374158] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374170] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374181] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374192] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374206] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374218] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374229] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374250] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374261] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374272] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374283] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374295] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374322] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374332] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374342] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374352] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:20.865 [2024-04-18 13:42:23.374361] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374370] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374398] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374409] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374421] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374432] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374443] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374469] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374479] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374489] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374499] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374511] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374521] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374531] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374540] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374550] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374559] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.865 [2024-04-18 13:42:23.374569] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374578] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374588] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374597] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.865 [2024-04-18 13:42:23.374607] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.865 [2024-04-18 13:42:23.374616] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374625] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374635] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 
00:13:20.866 [2024-04-18 13:42:23.374644] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374654] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374663] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374673] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374683] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374692] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374702] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374711] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374721] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374730] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374740] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374749] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374758] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374768] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374777] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374787] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374797] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374807] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374816] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374825] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374835] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374844] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374854] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374863] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.374873] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374883] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374892] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374902] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374928] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374946] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374958] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374968] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.374979] rdma.c: 
689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.374993] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375003] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:20.866 [2024-04-18 13:42:23.375013] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.375023] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375033] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375043] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375054] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.375064] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375074] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.375084] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375094] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375104] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375114] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375125] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375136] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375146] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375157] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375166] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375176] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375187] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:20.866 [2024-04-18 13:42:23.375197] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.375207] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375224] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375250] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375261] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.375273] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375290] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375301] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375312] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375324] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375335] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375346] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 
00:13:20.866 [2024-04-18 13:42:23.375358] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375369] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375380] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.866 [2024-04-18 13:42:23.375391] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375406] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375418] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375429] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375441] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375458] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375469] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375480] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375492] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375503] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375515] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375526] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375538] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.866 [2024-04-18 13:42:23.375550] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.866 [2024-04-18 13:42:23.375561] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375572] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375583] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375594] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375605] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375617] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375628] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375639] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375650] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375661] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375672] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375683] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375694] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375706] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375717] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375728] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375739] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:20.867 [2024-04-18 13:42:23.375750] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.375761] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375772] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.375783] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375794] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375805] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375816] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375827] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:20.867 [2024-04-18 13:42:23.375838] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.375849] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375860] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375871] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375885] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.375898] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375909] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.375920] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375931] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375950] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.375962] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.375973] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.376000] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.376012] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.376022] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.376032] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.376042] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.376052] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.376062] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:20.867 [2024-04-18 13:42:23.376072] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.376083] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:20.867 [2024-04-18 13:42:23.376093] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.376104] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 
00:13:20.867 [2024-04-18 13:42:23.376114] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:20.867 [2024-04-18 13:42:23.376124] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:27.463 13:42:29 -- target/device_removal.sh@147 -- # seq 1 10 00:13:27.463 13:42:29 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:13:27.463 13:42:29 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:13:27.463 13:42:29 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:13:27.463 13:42:29 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:27.463 13:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:27.463 13:42:29 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:27.463 13:42:29 -- common/autotest_common.sh@10 -- # set +x 00:13:27.463 13:42:29 -- target/device_removal.sh@77 -- # grep mlx5_1 00:13:27.463 13:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:27.463 13:42:29 -- target/device_removal.sh@78 -- # return 1 00:13:27.463 13:42:29 -- target/device_removal.sh@149 -- # break 00:13:27.463 13:42:29 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:27.463 13:42:29 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:27.463 13:42:29 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:27.463 13:42:29 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:27.463 13:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:27.463 13:42:29 -- common/autotest_common.sh@10 -- # set +x 00:13:27.463 13:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:27.463 13:42:29 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:13:27.463 13:42:29 -- target/device_removal.sh@160 -- # rescan_pci 00:13:27.463 13:42:29 -- target/device_removal.sh@57 -- # echo 1 00:13:27.463 13:42:30 -- target/device_removal.sh@162 -- # seq 1 10 00:13:27.463 13:42:30 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:13:27.463 13:42:30 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.1/net 00:13:27.463 13:42:30 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:13:27.463 13:42:30 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:13:27.463 13:42:30 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:13:27.463 13:42:30 -- target/device_removal.sh@171 -- # break 00:13:27.463 13:42:30 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:13:27.463 13:42:30 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:13:27.723 [2024-04-18 13:42:30.324234] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1931c50/0x18f05b0) succeed. 00:13:27.723 [2024-04-18 13:42:30.324357] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
00:13:31.003 13:42:33 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:13:31.003 13:42:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:31.003 13:42:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.003 13:42:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:31.003 13:42:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.003 13:42:33 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:13:31.003 13:42:33 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:13:31.003 13:42:33 -- target/device_removal.sh@186 -- # seq 1 10 00:13:31.003 13:42:33 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:13:31.003 13:42:33 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:31.003 13:42:33 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:31.003 13:42:33 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:31.003 13:42:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.003 13:42:33 -- common/autotest_common.sh@10 -- # set +x 00:13:31.003 13:42:33 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:31.003 [2024-04-18 13:42:33.326694] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:13:31.003 [2024-04-18 13:42:33.326766] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:13:31.003 [2024-04-18 13:42:33.326790] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:13:31.003 [2024-04-18 13:42:33.326807] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:13:31.003 13:42:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.003 13:42:33 -- target/device_removal.sh@187 -- # ib_count=2 00:13:31.004 13:42:33 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:13:31.004 13:42:33 -- target/device_removal.sh@189 -- # break 00:13:31.004 13:42:33 -- target/device_removal.sh@200 -- # stop_bdevperf 00:13:31.004 13:42:33 -- target/device_removal.sh@116 -- # wait 1127647 00:14:38.669 0 00:14:38.669 13:43:38 -- target/device_removal.sh@118 -- # killprocess 1127497 00:14:38.669 13:43:38 -- common/autotest_common.sh@936 -- # '[' -z 1127497 ']' 00:14:38.669 13:43:38 -- common/autotest_common.sh@940 -- # kill -0 1127497 00:14:38.669 13:43:38 -- common/autotest_common.sh@941 -- # uname 00:14:38.669 13:43:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:38.669 13:43:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1127497 00:14:38.669 13:43:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:38.669 13:43:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:38.669 13:43:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1127497' 00:14:38.669 killing process with pid 1127497 00:14:38.669 13:43:38 -- common/autotest_common.sh@955 -- # kill 1127497 00:14:38.669 13:43:38 -- common/autotest_common.sh@960 -- # wait 1127497 00:14:38.669 13:43:38 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:14:38.669 13:43:38 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:14:38.669 [2024-04-18 13:42:07.613106] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:14:38.669 [2024-04-18 13:42:07.613220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127497 ] 00:14:38.669 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.669 [2024-04-18 13:42:07.700201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.669 [2024-04-18 13:42:07.825147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.669 Running I/O for 90 seconds... 00:14:38.669 [2024-04-18 13:42:13.313122] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:38.669 [2024-04-18 13:42:13.313174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.669 [2024-04-18 13:42:13.313195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.669 [2024-04-18 13:42:13.313213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.669 [2024-04-18 13:42:13.313228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.669 [2024-04-18 13:42:13.313244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.669 [2024-04-18 13:42:13.313261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.669 [2024-04-18 13:42:13.313278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.669 [2024-04-18 13:42:13.313295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.669 [2024-04-18 13:42:13.315046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.669 [2024-04-18 13:42:13.315075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:38.669 [2024-04-18 13:42:13.315114] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:38.670 [2024-04-18 13:42:13.323094] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.333118] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.344225] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.354250] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.365019] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.375763] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.670 [2024-04-18 13:42:13.386464] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.396538] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.406855] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.417001] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.427013] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.437489] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.447516] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.458610] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.468637] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.479131] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.490354] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.500828] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.510854] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.522175] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.533168] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.543783] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.553809] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.564300] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.574691] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.584717] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.595084] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.605367] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.616029] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.670 [2024-04-18 13:42:13.626053] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.636079] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.649511] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.659524] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.669552] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.679588] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.690677] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.700709] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.710738] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.721829] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.732444] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.742470] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.752932] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.764037] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.774449] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.784473] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.794608] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.805534] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.815561] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.825586] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.835929] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.845994] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.856111] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.670 [2024-04-18 13:42:13.867292] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.877924] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.888397] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.898848] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.908873] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.918961] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.929371] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.939665] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.949695] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.960162] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.970186] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.980369] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:13.991484] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.002033] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.012056] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.022441] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.033536] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.044017] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.054041] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.064154] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.074182] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.084209] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.094454] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.670 [2024-04-18 13:42:14.104765] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.115523] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.125582] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.136141] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.146929] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.156961] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.167333] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.177371] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.191580] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.202776] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.213320] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.223346] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.233692] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.244242] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.670 [2024-04-18 13:42:14.254267] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.671 [2024-04-18 13:42:14.264306] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.671 [2024-04-18 13:42:14.275112] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.671 [2024-04-18 13:42:14.285565] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.671 [2024-04-18 13:42:14.295593] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.671 [2024-04-18 13:42:14.305693] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.671 [2024-04-18 13:42:14.316643] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.671 [2024-04-18 13:42:14.317592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.671 [2024-04-18 13:42:14.317619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.671 [2024-04-18 13:42:14.317669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.671 [2024-04-18 13:42:14.317719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.671 [2024-04-18 13:42:14.317752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.671 [2024-04-18 13:42:14.317784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.317820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.317854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.317886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.317919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72736 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000077f6000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.317959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.317977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.317993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ee000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ea000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e8000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e6000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e4000 len:0x1000 
key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e2000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e0000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077de000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077dc000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077da000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d8000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d6000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x180f00 00:14:38.671 
[2024-04-18 13:42:14.318555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.671 [2024-04-18 13:42:14.318733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x180f00 00:14:38.671 [2024-04-18 13:42:14.318747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.318765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.318780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.318796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.318811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.318837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.318853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.318870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.318885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.318902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.318917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.318934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.318956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.318974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.318990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319445] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.672 [2024-04-18 13:42:14.319931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x180f00 00:14:38.672 [2024-04-18 13:42:14.319957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.319976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.319992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 
cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 
dnr:0 00:14:38.673 [2024-04-18 13:42:14.320366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 
13:42:14.320653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.320972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.320990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.321005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.321022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.321037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.321054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.321073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.321090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.321105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.321123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x180f00 00:14:38.673 [2024-04-18 13:42:14.321137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.673 [2024-04-18 13:42:14.321154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321251] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:73608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.321815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x180f00 00:14:38.674 [2024-04-18 13:42:14.321831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.338707] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:14:38.674 
[2024-04-18 13:42:14.338809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:38.674 [2024-04-18 13:42:14.338831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:38.674 [2024-04-18 13:42:14.338846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73680 len:8 PRP1 0x0 PRP2 0x0 00:14:38.674 [2024-04-18 13:42:14.338861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.674 [2024-04-18 13:42:14.341040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.674 [2024-04-18 13:42:14.341457] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:38.674 [2024-04-18 13:42:14.341486] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.674 [2024-04-18 13:42:14.341500] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:38.674 [2024-04-18 13:42:14.341530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.674 [2024-04-18 13:42:14.341548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:38.674 [2024-04-18 13:42:14.341571] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:38.674 [2024-04-18 13:42:14.341588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:38.674 [2024-04-18 13:42:14.341603] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:38.674 [2024-04-18 13:42:14.341638] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.674 [2024-04-18 13:42:14.341657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.675 [2024-04-18 13:42:15.344618] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:38.675 [2024-04-18 13:42:15.344679] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.675 [2024-04-18 13:42:15.344694] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:38.675 [2024-04-18 13:42:15.344731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.675 [2024-04-18 13:42:15.344750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:14:38.675 [2024-04-18 13:42:15.344774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:38.675 [2024-04-18 13:42:15.344790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:38.675 [2024-04-18 13:42:15.344806] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:38.675 [2024-04-18 13:42:15.344847] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.675 [2024-04-18 13:42:15.344867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.675 [2024-04-18 13:42:16.349154] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:38.675 [2024-04-18 13:42:16.349215] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.675 [2024-04-18 13:42:16.349232] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:38.675 [2024-04-18 13:42:16.349278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.675 [2024-04-18 13:42:16.349297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:38.675 [2024-04-18 13:42:16.349332] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:38.675 [2024-04-18 13:42:16.349350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:38.675 [2024-04-18 13:42:16.349366] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:38.675 [2024-04-18 13:42:16.349409] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.675 [2024-04-18 13:42:16.349429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.675 [2024-04-18 13:42:18.355377] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.675 [2024-04-18 13:42:18.355439] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:38.675 [2024-04-18 13:42:18.355480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.675 [2024-04-18 13:42:18.355500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:38.675 [2024-04-18 13:42:18.356592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:38.675 [2024-04-18 13:42:18.356619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:38.675 [2024-04-18 13:42:18.356636] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:38.675 [2024-04-18 13:42:18.356681] bdev_nvme.c:2871:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 
00:14:38.675 [2024-04-18 13:42:18.357681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.675 [2024-04-18 13:42:18.357766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.675 [2024-04-18 13:42:19.360777] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.675 [2024-04-18 13:42:19.360828] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:38.675 [2024-04-18 13:42:19.360868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.675 [2024-04-18 13:42:19.360888] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:38.675 [2024-04-18 13:42:19.361403] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:38.675 [2024-04-18 13:42:19.361430] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:38.675 [2024-04-18 13:42:19.361448] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:38.675 [2024-04-18 13:42:19.362189] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.675 [2024-04-18 13:42:19.362219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.675 [2024-04-18 13:42:21.367209] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.675 [2024-04-18 13:42:21.367274] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:38.675 [2024-04-18 13:42:21.367310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.675 [2024-04-18 13:42:21.367326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:38.675 [2024-04-18 13:42:21.367346] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:38.675 [2024-04-18 13:42:21.367359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:38.675 [2024-04-18 13:42:21.367372] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:38.675 [2024-04-18 13:42:21.367406] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.675 [2024-04-18 13:42:21.367423] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.675 [2024-04-18 13:42:23.372417] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.675 [2024-04-18 13:42:23.372450] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:38.675 [2024-04-18 13:42:23.372507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.675 [2024-04-18 13:42:23.372522] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:14:38.675 [2024-04-18 13:42:23.372542] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:38.675 [2024-04-18 13:42:23.372555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:38.675 [2024-04-18 13:42:23.372569] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:38.675 [2024-04-18 13:42:23.372599] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.675 [2024-04-18 13:42:23.372615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:38.675 [2024-04-18 13:42:23.372661] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:38.675 [2024-04-18 13:42:23.372681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.675 [2024-04-18 13:42:23.372695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.675 [2024-04-18 13:42:23.372708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.675 [2024-04-18 13:42:23.372720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.675 [2024-04-18 13:42:23.372733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.675 [2024-04-18 13:42:23.372744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.675 [2024-04-18 13:42:23.372756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.675 [2024-04-18 13:42:23.372768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32525 cdw0:16 sqhd:e5dc p:0 m:0 dnr:0 00:14:38.675 [2024-04-18 13:42:23.378090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.675 [2024-04-18 13:42:23.378116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:38.675 [2024-04-18 13:42:23.378158] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:38.675 [2024-04-18 13:42:23.382633] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.392657] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.402682] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.412707] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.422733] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.675 [2024-04-18 13:42:23.432760] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.442787] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.452814] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.462841] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.472866] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.482894] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.492933] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.502951] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.512977] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.523003] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.533030] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.543056] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.553083] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.563111] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.573136] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.675 [2024-04-18 13:42:23.583163] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.593190] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.603215] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.613241] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.623267] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.633293] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.643319] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.653345] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.676 [2024-04-18 13:42:23.663380] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.673406] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.683433] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.693460] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.703485] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.713512] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.723537] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.733563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.743589] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.753613] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.763639] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.773666] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.783693] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.793729] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.803743] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.813769] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.823797] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.833823] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.843849] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.853875] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.863902] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.873942] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.883964] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.676 [2024-04-18 13:42:23.893990] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.904016] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.914042] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.924070] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.934098] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.944124] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.954151] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.964176] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.974201] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.984237] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:23.994264] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.004289] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.014316] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.024343] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.034370] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.044402] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.054430] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.064457] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.074484] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.084513] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.094538] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.104564] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.114589] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:38.676 [2024-04-18 13:42:24.124614] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.134642] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.144669] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.154697] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.164723] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.174749] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.184776] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.194802] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.204829] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.214856] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.224882] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.234909] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.244942] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.254965] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.265006] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.275030] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.285055] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.295080] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.305107] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.315134] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.325158] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.335184] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.345211] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
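The same bdev_nvme failover notice repeats at roughly 10 ms intervals for as long as the first failover is still outstanding; on this run it fires just over seventy times between 13:42:23.66 and 13:42:24.38 (continuing briefly below). When triaging a console capture like this one, a quick grep/awk pass can measure the storm; a minimal sketch follows, assuming the console output has been saved to a local file ("console.log" is a placeholder name, not something this job writes).

  #!/usr/bin/env bash
  # Count the repeated failover notices in a saved copy of this console output
  # and show the first/last wall-clock stamps they carry. "console.log" is a
  # placeholder path, not a file produced by this Jenkins job.
  log=${1:-console.log}
  pat='bdev_nvme_failover_ctrlr_unsafe: \*NOTICE\*: Unable to perform failover, already in progress\.'

  grep -oE "$pat" "$log" | wc -l        # total notices, even when several share one physical line
  grep -oE '\[2024-04-18 [0-9:.]+\] bdev_nvme\.c:2877:bdev_nvme_failover_ctrlr_unsafe' "$log" |
    sed -n '1p;$p'                      # first and last timestamp attached to the notice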
00:14:38.676 [2024-04-18 13:42:24.355250] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.365274] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.375399] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:38.676 [2024-04-18 13:42:24.380635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x1bf900 00:14:38.676 [2024-04-18 13:42:24.380666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.676 [2024-04-18 13:42:24.380706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x1bf900 00:14:38.676 [2024-04-18 13:42:24.380721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.676 [2024-04-18 13:42:24.380736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x1bf900 00:14:38.676 [2024-04-18 13:42:24.380750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.676 [2024-04-18 13:42:24.380764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x1bf900 00:14:38.676 [2024-04-18 13:42:24.380777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.676 [2024-04-18 13:42:24.380792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x1bf900 00:14:38.676 [2024-04-18 13:42:24.380805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.380819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.380833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.380847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.380872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.380887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.380900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 
cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.380930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.380954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.380981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.380995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 
dnr:0 00:14:38.677 [2024-04-18 13:42:24.381209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 
13:42:24.381474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1bf900 00:14:38.677 [2024-04-18 13:42:24.381906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.677 [2024-04-18 13:42:24.381921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.381934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.381972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.381986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:12160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12232 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bf900 00:14:38.678 [2024-04-18 13:42:24.382706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.678 [2024-04-18 13:42:24.382739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.678 [2024-04-18 13:42:24.382765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.678 [2024-04-18 13:42:24.382791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.678 [2024-04-18 13:42:24.382817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.678 [2024-04-18 13:42:24.382831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.382843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.382857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.382872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.382886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.382899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.382912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.382949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.382967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.382981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.382996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 
p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.679 [2024-04-18 13:42:24.383981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.679 [2024-04-18 13:42:24.383995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.384412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.680 [2024-04-18 13:42:24.384440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32525 cdw0:46076090 sqhd:4b93 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.399089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:38.680 [2024-04-18 13:42:24.399112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:38.680 [2024-04-18 13:42:24.399140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12752 len:8 PRP1 0x0 PRP2 0x0 00:14:38.680 [2024-04-18 13:42:24.399154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.680 [2024-04-18 13:42:24.399231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:38.680 [2024-04-18 13:42:24.417542] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:38.680 [2024-04-18 13:42:24.417568] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.680 [2024-04-18 13:42:24.417596] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:38.680 [2024-04-18 13:42:24.417627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.680 [2024-04-18 13:42:24.417642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:38.680 [2024-04-18 13:42:24.417675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:38.680 [2024-04-18 13:42:24.417692] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:38.680 [2024-04-18 13:42:24.417704] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:38.680 [2024-04-18 13:42:24.417750] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.680 [2024-04-18 13:42:24.417768] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:38.680 [2024-04-18 13:42:24.425181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:38.680 [2024-04-18 13:42:25.421821] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:38.680 [2024-04-18 13:42:25.421850] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.680 [2024-04-18 13:42:25.421878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:38.680 [2024-04-18 13:42:25.421914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.680 [2024-04-18 13:42:25.421962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:38.680 [2024-04-18 13:42:25.421989] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:38.680 [2024-04-18 13:42:25.422004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:38.680 [2024-04-18 13:42:25.422016] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:38.680 [2024-04-18 13:42:25.422044] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.680 [2024-04-18 13:42:25.422059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:38.680 [2024-04-18 13:42:26.424640] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:38.680 [2024-04-18 13:42:26.424698] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.680 [2024-04-18 13:42:26.424727] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:38.680 [2024-04-18 13:42:26.424760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.680 [2024-04-18 13:42:26.424778] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
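Further above, every I/O still queued on the deleted submission queue was dumped twice: once by nvme_io_qpair_print_command and once by spdk_nvme_print_completion with "ABORTED - SQ DELETION". A short awk pass over a saved copy of the console output can condense that flood into per-opcode counts and the LBA span; this is only a triage aid, and "console.log" is again a placeholder file name.

  #!/usr/bin/env bash
  # Condense the aborted-command dump: how many READs/WRITEs were printed by
  # nvme_io_qpair_print_command and what LBA range they cover.
  log=${1:-console.log}

  grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+ len:[0-9]+' "$log" |
  awk '{
      split($5, f, ":"); lba = f[2] + 0     # $5 looks like "lba:11736"
      count[$1]++
      if (NR == 1 || lba < min) min = lba
      if (lba > max) max = lba
  }
  END {
      for (op in count) printf "%-5s aborted: %d\n", op, count[op]
      printf "LBA span: %d .. %d\n", min, max
  }'

On this run the dump covers LBAs 11736 through 12744 in len:8 chunks, which matches the 4096-byte verify workload if the namespace uses 512-byte blocks (an assumption; the block size is not printed here).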
00:14:38.680 [2024-04-18 13:42:26.424798] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:38.680 [2024-04-18 13:42:26.424813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:38.680 [2024-04-18 13:42:26.424827] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:38.680 [2024-04-18 13:42:26.424867] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.680 [2024-04-18 13:42:26.424885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:38.680 [2024-04-18 13:42:28.432333] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.680 [2024-04-18 13:42:28.432384] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:38.680 [2024-04-18 13:42:28.432437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.680 [2024-04-18 13:42:28.432454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:38.680 [2024-04-18 13:42:28.433539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:38.680 [2024-04-18 13:42:28.433563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:38.680 [2024-04-18 13:42:28.433592] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:38.680 [2024-04-18 13:42:28.433660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.680 [2024-04-18 13:42:28.433680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:38.680 [2024-04-18 13:42:30.439486] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.680 [2024-04-18 13:42:30.439524] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:38.680 [2024-04-18 13:42:30.439575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.680 [2024-04-18 13:42:30.439591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:38.680 [2024-04-18 13:42:30.440578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:38.680 [2024-04-18 13:42:30.440607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:38.680 [2024-04-18 13:42:30.440648] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:38.680 [2024-04-18 13:42:30.440723] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
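The repeated RDMA_CM_EVENT_ADDR_ERROR, "CQ transport error -6 (No such device or address)" and "Resetting controller failed." records are the expected symptom while test_remove_and_rescan has surprise-removed the mlx5 port's PCI function. The usual Linux mechanism for such a removal is the sysfs remove/rescan knobs sketched below; the BDF is an example value (this log never prints the device's real address), and the SPDK test script may drive the removal differently.

  #!/usr/bin/env bash
  # Minimal sketch of a surprise PCI remove followed by a rescan, the kind of
  # event behind the address-resolution errors above. The BDF is an example,
  # not taken from this log.
  bdf=0000:d8:00.1                                      # example mlx5 function

  echo 1 | sudo tee /sys/bus/pci/devices/$bdf/remove    # surprise-remove the function
  sleep 5                                               # give the target time to fail I/O over
  echo 1 | sudo tee /sys/bus/pci/rescan                 # bring the function back

  rdma link show                                        # the RDMA port should reappear after the rescan

Until the device comes back, the target keeps cycling through "resetting controller" and "Resetting controller failed.", exactly as the records above and below show, and recovers once address resolution succeeds again ("Resetting controller successful.").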
00:14:38.680 [2024-04-18 13:42:30.440764] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:38.680 [2024-04-18 13:42:32.446977] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:38.680 [2024-04-18 13:42:32.447045] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:38.680 [2024-04-18 13:42:32.447108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:38.680 [2024-04-18 13:42:32.447125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:38.681 [2024-04-18 13:42:32.447571] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:38.681 [2024-04-18 13:42:32.447591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:38.681 [2024-04-18 13:42:32.447605] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:38.681 [2024-04-18 13:42:32.447645] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:38.681 [2024-04-18 13:42:32.447673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:38.681 [2024-04-18 13:42:33.512279] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:38.681 00:14:38.681 Latency(us) 00:14:38.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.681 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:38.681 Verification LBA range: start 0x0 length 0x8000 00:14:38.681 Nvme_mlx_0_0n1 : 90.01 8995.25 35.14 0.00 0.00 14208.28 1686.95 12079595.52 00:14:38.681 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:38.681 Verification LBA range: start 0x0 length 0x8000 00:14:38.681 Nvme_mlx_0_1n1 : 90.01 8142.28 31.81 0.00 0.00 15693.83 3131.16 11085390.13 00:14:38.681 =================================================================================================================== 00:14:38.681 Total : 17137.53 66.94 0.00 0.00 14914.10 1686.95 12079595.52 00:14:38.681 Received shutdown signal, test time was about 90.000000 seconds 00:14:38.681 00:14:38.681 Latency(us) 00:14:38.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.681 =================================================================================================================== 00:14:38.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.681 13:43:38 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:14:38.681 13:43:38 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:14:38.681 13:43:38 -- target/device_removal.sh@202 -- # killprocess 1127198 00:14:38.681 13:43:38 -- common/autotest_common.sh@936 -- # '[' -z 1127198 ']' 00:14:38.681 13:43:38 -- common/autotest_common.sh@940 -- # kill -0 1127198 00:14:38.681 13:43:38 -- common/autotest_common.sh@941 -- # uname 00:14:38.681 13:43:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:38.681 13:43:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1127198 00:14:38.681 13:43:38 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:38.681 13:43:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:38.681 13:43:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1127198' 00:14:38.681 killing process with pid 1127198 00:14:38.681 13:43:38 -- common/autotest_common.sh@955 -- # kill 1127198 00:14:38.681 13:43:38 -- common/autotest_common.sh@960 -- # wait 1127198 00:14:38.681 13:43:39 -- target/device_removal.sh@203 -- # nvmfpid= 00:14:38.681 13:43:39 -- target/device_removal.sh@205 -- # return 0 00:14:38.681 00:14:38.681 real 1m32.667s 00:14:38.681 user 4m23.575s 00:14:38.681 sys 0m3.569s 00:14:38.681 13:43:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.681 13:43:39 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 ************************************ 00:14:38.681 END TEST nvmf_device_removal_pci_remove_no_srq 00:14:38.681 ************************************ 00:14:38.681 13:43:39 -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:14:38.681 13:43:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:38.681 13:43:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.681 13:43:39 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 ************************************ 00:14:38.681 START TEST nvmf_device_removal_pci_remove 00:14:38.681 ************************************ 00:14:38.681 13:43:39 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan 00:14:38.681 13:43:39 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:14:38.681 13:43:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:38.681 13:43:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:38.681 13:43:39 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 13:43:39 -- nvmf/common.sh@470 -- # nvmfpid=1138609 00:14:38.681 13:43:39 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:38.681 13:43:39 -- nvmf/common.sh@471 -- # waitforlisten 1138609 00:14:38.681 13:43:39 -- common/autotest_common.sh@817 -- # '[' -z 1138609 ']' 00:14:38.681 13:43:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.681 13:43:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:38.681 13:43:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.681 13:43:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:38.681 13:43:39 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 [2024-04-18 13:43:39.610187] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:38.681 [2024-04-18 13:43:39.610284] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.681 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.681 [2024-04-18 13:43:39.698519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:38.681 [2024-04-18 13:43:39.818744] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:38.681 [2024-04-18 13:43:39.818817] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.681 [2024-04-18 13:43:39.818834] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.681 [2024-04-18 13:43:39.818847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.681 [2024-04-18 13:43:39.818859] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.681 [2024-04-18 13:43:39.818953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.681 [2024-04-18 13:43:39.818961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.681 13:43:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.681 13:43:39 -- common/autotest_common.sh@850 -- # return 0 00:14:38.681 13:43:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:38.681 13:43:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:38.681 13:43:39 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 13:43:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.681 13:43:39 -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:14:38.681 13:43:39 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:14:38.681 13:43:39 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:14:38.681 13:43:39 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:38.681 13:43:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.681 13:43:39 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 [2024-04-18 13:43:39.999513] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2253a30/0x2257f20) succeed. 00:14:38.681 [2024-04-18 13:43:40.012875] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2254f30/0x22995b0) succeed. 
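For reference, the target bring-up traced above reduces to two steps: start nvmf_tgt and create the RDMA transport that will later carry both Mellanox ports. A minimal sketch of the equivalent commands, run from the SPDK tree against the target's default RPC socket; the core mask, buffer count and I/O unit size are the values visible in the trace, everything else is left at defaults:

    # Start the NVMe-oF target on cores 0-1, as in the trace
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

    # Create the RDMA transport: 1024 shared buffers, 8 KiB I/O unit size
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
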
00:14:38.681 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.681 13:43:40 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:14:38.681 13:43:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.681 13:43:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.681 13:43:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.681 13:43:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.681 13:43:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.681 13:43:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.681 13:43:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.681 13:43:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.681 13:43:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.681 13:43:40 -- nvmf/common.sh@105 -- # continue 2 00:14:38.681 13:43:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.681 13:43:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.681 13:43:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.681 13:43:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.681 13:43:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.681 13:43:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.681 13:43:40 -- nvmf/common.sh@105 -- # continue 2 00:14:38.681 13:43:40 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:38.681 13:43:40 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:14:38.681 13:43:40 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:38.681 13:43:40 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:14:38.681 13:43:40 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:14:38.681 13:43:40 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:14:38.681 13:43:40 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:14:38.681 13:43:40 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:38.681 13:43:40 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:14:38.681 13:43:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.681 13:43:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.681 13:43:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.681 13:43:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.681 13:43:40 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:14:38.681 13:43:40 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:14:38.681 13:43:40 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:14:38.681 13:43:40 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:38.681 13:43:40 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:14:38.681 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.681 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.681 13:43:40 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:14:38.681 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.681 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.681 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.681 13:43:40 -- 
target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:14:38.681 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.681 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 [2024-04-18 13:43:40.224342] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@41 -- # return 0 00:14:38.682 13:43:40 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:14:38.682 13:43:40 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:38.682 13:43:40 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:38.682 13:43:40 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:14:38.682 13:43:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.682 13:43:40 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:14:38.682 13:43:40 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:14:38.682 13:43:40 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:38.682 13:43:40 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 
00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 [2024-04-18 13:43:40.307288] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@41 -- # return 0 00:14:38.682 13:43:40 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@53 -- # return 0 00:14:38.682 13:43:40 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:14:38.682 13:43:40 -- target/device_removal.sh@87 -- # local dev_names 00:14:38.682 13:43:40 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:38.682 13:43:40 -- target/device_removal.sh@91 -- # bdevperf_pid=1138776 00:14:38.682 13:43:40 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.682 13:43:40 -- target/device_removal.sh@94 -- # waitforlisten 1138776 /var/tmp/bdevperf.sock 00:14:38.682 13:43:40 -- common/autotest_common.sh@817 -- # '[' -z 1138776 ']' 00:14:38.682 13:43:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.682 13:43:40 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:38.682 13:43:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:38.682 13:43:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
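Each RDMA netdev found above is given a malloc bdev, a subsystem, a namespace and an RDMA listener on its own IPv4 address. A condensed sketch of that per-port sequence for mlx_0_0, using the names, sizes and address shown in the trace (mlx_0_1 on 192.168.100.9 follows the same pattern):

    # 128 MiB malloc bdev with 512-byte blocks, named after the netdev
    ./scripts/rpc.py bdev_malloc_create 128 512 -b mlx_0_0

    # Subsystem allowing any host (-a), with a per-device serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0

    # Expose the bdev as a namespace and listen on the port's address
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420
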
00:14:38.682 13:43:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 13:43:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.682 13:43:40 -- common/autotest_common.sh@850 -- # return 0 00:14:38.682 13:43:40 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:38.682 13:43:40 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:14:38.682 13:43:40 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:14:38.682 13:43:40 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:38.682 13:43:40 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:14:38.682 13:43:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.682 13:43:40 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:14:38.682 13:43:40 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 Nvme_mlx_0_0n1 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:38.682 13:43:40 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:38.682 13:43:40 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:14:38.682 13:43:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.682 13:43:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.682 13:43:40 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:14:38.682 13:43:40 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:14:38.682 13:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.682 13:43:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 Nvme_mlx_0_1n1 00:14:38.682 13:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.682 13:43:40 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=1138916 00:14:38.682 13:43:40 -- target/device_removal.sh@112 -- # sleep 5 00:14:38.682 13:43:40 -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:43.942 13:43:45 -- 
target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:43.942 13:43:45 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/net/mlx_0_0/device 00:14:43.942 13:43:45 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/infiniband 00:14:43.942 13:43:45 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:14:43.942 13:43:45 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:14:43.942 13:43:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:43.942 13:43:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:43.942 13:43:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:43.942 13:43:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:43.942 13:43:45 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:14:43.942 13:43:45 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/net/mlx_0_0/device 00:14:43.942 13:43:45 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0 00:14:43.942 13:43:45 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:43.942 13:43:45 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:43.942 13:43:45 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:43.942 13:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.942 13:43:45 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:43.942 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:14:43.942 13:43:45 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:43.942 13:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.942 mlx5_0 00:14:43.942 13:43:45 -- target/device_removal.sh@78 -- # return 0 00:14:43.942 13:43:45 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@67 -- # echo 1 00:14:43.942 13:43:45 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:43.942 13:43:45 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/net/mlx_0_0/device 00:14:43.942 [2024-04-18 13:43:46.008566] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
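The I/O side traced just before this first removal is a separate bdevperf application on its own RPC socket: it is started idle, one NVMe-oF controller is attached per target port, and the 90-second verify workload is then kicked off. A sketch using the parameters from the trace, with paths relative to the SPDK tree:

    # bdevperf on core 2, idle (-z) until bdevs are attached; queue depth 128, 4 KiB verify for 90 s
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

    # Attach one controller per target port (options copied from the trace)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 \
        -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1

    # Start the workload asynchronously
    ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
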
00:14:43.942 [2024-04-18 13:43:46.008750] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:43.942 [2024-04-18 13:43:46.011861] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:43.942 [2024-04-18 13:43:46.011895] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:14:49.232 13:43:51 -- target/device_removal.sh@147 -- # seq 1 10 00:14:49.232 13:43:51 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:14:49.232 13:43:51 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:49.232 13:43:51 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:49.232 13:43:51 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:49.232 13:43:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.232 13:43:51 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:49.232 13:43:51 -- common/autotest_common.sh@10 -- # set +x 00:14:49.232 13:43:51 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:49.232 13:43:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.232 13:43:51 -- target/device_removal.sh@78 -- # return 1 00:14:49.232 13:43:51 -- target/device_removal.sh@149 -- # break 00:14:49.232 13:43:51 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:49.232 13:43:51 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:49.232 13:43:51 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:49.232 13:43:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.232 13:43:51 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:49.232 13:43:51 -- common/autotest_common.sh@10 -- # set +x 00:14:49.232 13:43:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.232 13:43:51 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:14:49.232 13:43:51 -- target/device_removal.sh@160 -- # rescan_pci 00:14:49.232 13:43:51 -- target/device_removal.sh@57 -- # echo 1 00:14:50.164 [2024-04-18 13:43:52.673199] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x2330e80, err 11. Skip rescan. 00:14:50.164 13:43:52 -- target/device_removal.sh@162 -- # seq 1 10 00:14:50.164 13:43:52 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:14:50.164 13:43:52 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net 00:14:50.164 13:43:52 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:14:50.164 13:43:52 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:14:50.164 13:43:52 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:14:50.164 13:43:52 -- target/device_removal.sh@171 -- # break 00:14:50.164 13:43:52 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:14:50.164 13:43:52 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:14:50.422 [2024-04-18 13:43:53.066259] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22566c0/0x2257f20) succeed. 00:14:50.422 [2024-04-18 13:43:53.066341] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
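The removal/restore cycle itself is plain sysfs plus ip: write 1 to the port's PCI remove node, let the target drop the mlx5 device, rescan the bus, bring the netdev back up, re-add its address (it does not survive the unplug), and poll the target until the listener returns. A sketch for mlx_0_0, assuming its PCI function is 0000:81:00.0 as resolved in the trace; the exact sysfs nodes the script writes to are resolved at run time, the standard remove and whole-bus rescan attributes are used here only for illustration:

    # Hot-unplug the PCI function backing mlx_0_0; the target logs the port removal
    echo 1 | sudo tee /sys/bus/pci/devices/0000:81:00.0/remove

    # Re-enumerate the bus so the function and its netdev reappear
    echo 1 | sudo tee /sys/bus/pci/rescan
    sudo ip link set mlx_0_0 up
    sudo ip addr add 192.168.100.8/24 dev mlx_0_0

    # Device count in the target drops to 1 on removal and returns to 2 once the port listens again
    ./scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'
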
00:14:53.697 13:43:55 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:14:53.697 13:43:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:53.697 13:43:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:53.697 13:43:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:53.697 13:43:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:53.697 13:43:55 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:14:53.697 13:43:55 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:14:53.697 13:43:55 -- target/device_removal.sh@186 -- # seq 1 10 00:14:53.697 13:43:55 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:14:53.697 13:43:55 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:53.697 13:43:55 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:53.697 13:43:55 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:53.697 13:43:55 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:53.697 13:43:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.697 13:43:55 -- common/autotest_common.sh@10 -- # set +x 00:14:53.697 [2024-04-18 13:43:55.921206] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.697 [2024-04-18 13:43:55.921258] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:14:53.697 [2024-04-18 13:43:55.921278] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:53.697 [2024-04-18 13:43:55.921296] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:53.697 13:43:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.697 13:43:55 -- target/device_removal.sh@187 -- # ib_count=2 00:14:53.697 13:43:55 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:14:53.697 13:43:55 -- target/device_removal.sh@189 -- # break 00:14:53.697 13:43:55 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:53.697 13:43:55 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:14:53.697 13:43:55 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:14:53.697 13:43:55 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:14:53.697 13:43:55 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:14:53.697 13:43:55 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:53.697 13:43:55 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.1/net/mlx_0_1/device 00:14:53.697 13:43:55 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.1/infiniband 00:14:53.697 13:43:55 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:14:53.697 13:43:55 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:14:53.697 13:43:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:53.697 13:43:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:53.697 13:43:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:53.697 13:43:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:53.697 13:43:55 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:14:53.697 13:43:55 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:14:53.697 13:43:55 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:53.697 13:43:55 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.1/net/mlx_0_1/device 00:14:53.697 13:43:55 -- 
target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.1 00:14:53.697 13:43:55 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:14:53.697 13:43:55 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:14:53.697 13:43:55 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:53.698 13:43:55 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:53.698 13:43:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.698 13:43:55 -- common/autotest_common.sh@10 -- # set +x 00:14:53.698 13:43:55 -- target/device_removal.sh@77 -- # grep mlx5_1 00:14:53.698 13:43:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.698 mlx5_1 00:14:53.698 13:43:56 -- target/device_removal.sh@78 -- # return 0 00:14:53.698 13:43:56 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:14:53.698 13:43:56 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:14:53.698 13:43:56 -- target/device_removal.sh@67 -- # echo 1 00:14:53.698 13:43:56 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:14:53.698 13:43:56 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:53.698 13:43:56 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:81:00.1/net/mlx_0_1/device 00:14:53.698 [2024-04-18 13:43:56.055792] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:14:53.698 [2024-04-18 13:43:56.055948] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:53.698 [2024-04-18 13:43:56.066041] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:53.698 [2024-04-18 13:43:56.066093] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:15:00.246 13:44:02 -- target/device_removal.sh@147 -- # seq 1 10 00:15:00.246 13:44:02 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:15:00.246 13:44:02 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:15:00.246 13:44:02 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:15:00.246 13:44:02 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:00.246 13:44:02 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:00.246 13:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.246 13:44:02 -- common/autotest_common.sh@10 -- # set +x 00:15:00.246 13:44:02 -- target/device_removal.sh@77 -- # grep mlx5_1 00:15:00.246 13:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.246 13:44:02 -- target/device_removal.sh@78 -- # return 1 00:15:00.246 13:44:02 -- target/device_removal.sh@149 -- # break 00:15:00.246 13:44:02 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:00.246 13:44:02 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:00.246 13:44:02 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:00.246 13:44:02 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:00.246 13:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.246 13:44:02 -- common/autotest_common.sh@10 -- # set +x 00:15:00.246 13:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.246 13:44:02 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:15:00.246 13:44:02 -- 
target/device_removal.sh@160 -- # rescan_pci 00:15:00.246 13:44:02 -- target/device_removal.sh@57 -- # echo 1 00:15:00.246 [2024-04-18 13:44:02.986086] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x223f5a0, err 11. Skip rescan. 00:15:00.246 13:44:03 -- target/device_removal.sh@162 -- # seq 1 10 00:15:00.246 13:44:03 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:15:00.246 13:44:03 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.1/net 00:15:00.246 13:44:03 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:15:00.246 13:44:03 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:15:00.246 13:44:03 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:15:00.246 13:44:03 -- target/device_removal.sh@171 -- # break 00:15:00.246 13:44:03 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:15:00.246 13:44:03 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:15:00.813 [2024-04-18 13:44:03.382613] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2256d40/0x22995b0) succeed. 00:15:00.813 [2024-04-18 13:44:03.382752] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:15:04.092 13:44:06 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:15:04.092 13:44:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:04.092 13:44:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:04.092 13:44:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:04.092 13:44:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:04.092 13:44:06 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:15:04.092 13:44:06 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:15:04.092 13:44:06 -- target/device_removal.sh@186 -- # seq 1 10 00:15:04.092 13:44:06 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:15:04.092 13:44:06 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:04.092 13:44:06 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:04.092 13:44:06 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:04.092 13:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.092 13:44:06 -- common/autotest_common.sh@10 -- # set +x 00:15:04.092 13:44:06 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:04.092 [2024-04-18 13:44:06.325980] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:15:04.092 [2024-04-18 13:44:06.326046] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:15:04.092 [2024-04-18 13:44:06.326070] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:04.092 [2024-04-18 13:44:06.326086] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:04.092 13:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.092 13:44:06 -- target/device_removal.sh@187 -- # ib_count=2 00:15:04.092 13:44:06 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:15:04.092 13:44:06 -- target/device_removal.sh@189 -- # break 00:15:04.092 13:44:06 -- target/device_removal.sh@200 -- # stop_bdevperf 00:15:04.092 13:44:06 -- target/device_removal.sh@116 -- # wait 1138916 00:16:11.773 0 00:16:11.773 13:45:11 -- target/device_removal.sh@118 -- # killprocess 1138776 00:16:11.773 13:45:11 -- 
common/autotest_common.sh@936 -- # '[' -z 1138776 ']' 00:16:11.773 13:45:11 -- common/autotest_common.sh@940 -- # kill -0 1138776 00:16:11.773 13:45:11 -- common/autotest_common.sh@941 -- # uname 00:16:11.773 13:45:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.773 13:45:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1138776 00:16:11.773 13:45:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:11.773 13:45:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:11.773 13:45:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1138776' 00:16:11.773 killing process with pid 1138776 00:16:11.773 13:45:11 -- common/autotest_common.sh@955 -- # kill 1138776 00:16:11.773 13:45:11 -- common/autotest_common.sh@960 -- # wait 1138776 00:16:11.773 13:45:11 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:16:11.773 13:45:11 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:16:11.773 [2024-04-18 13:43:40.365539] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:11.773 [2024-04-18 13:43:40.365657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138776 ] 00:16:11.773 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.773 [2024-04-18 13:43:40.453419] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.773 [2024-04-18 13:43:40.573953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.773 Running I/O for 90 seconds... 00:16:11.773 [2024-04-18 13:43:46.013250] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:11.773 [2024-04-18 13:43:46.013297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.773 [2024-04-18 13:43:46.013319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.773 [2024-04-18 13:43:46.013338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.773 [2024-04-18 13:43:46.013353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.773 [2024-04-18 13:43:46.013370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.773 [2024-04-18 13:43:46.013385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.773 [2024-04-18 13:43:46.013401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.773 [2024-04-18 13:43:46.013416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.773 [2024-04-18 13:43:46.015780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.773 [2024-04-18 13:43:46.015811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:16:11.773 [2024-04-18 13:43:46.015859] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:11.773 [2024-04-18 13:43:46.023248] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.033269] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.043284] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.053312] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.063340] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.073370] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.083399] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.093716] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.103739] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.113780] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.124020] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.134448] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.144474] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.154501] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.165401] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.175490] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.185517] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.196402] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.206429] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.216454] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.226530] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.236827] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.773 [2024-04-18 13:43:46.247126] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.257154] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.267448] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.277475] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.287500] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.297528] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.307712] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.317724] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.328024] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.338945] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.349161] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.359188] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.369637] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.380458] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.390562] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.400586] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.410912] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.421735] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.431762] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.441928] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.773 [2024-04-18 13:43:46.451967] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.465124] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.475181] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.774 [2024-04-18 13:43:46.485213] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.495239] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.506225] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.516957] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.527156] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.537205] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.548021] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.558551] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.568576] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.578602] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.589277] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.599771] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.609798] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.619823] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.630524] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.643514] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.653537] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.664117] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.674144] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.684227] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.694745] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.706316] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.716340] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.774 [2024-04-18 13:43:46.726369] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.737614] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.747793] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.758132] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.768156] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.778766] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.789167] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.799191] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.809648] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.820503] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.830674] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.840768] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.850971] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.860991] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.871085] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.881112] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.891136] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.902023] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.912479] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.922523] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.933440] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.944052] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.954203] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.774 [2024-04-18 13:43:46.964492] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.975614] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.985812] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:46.995826] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:47.005856] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:47.016020] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.774 [2024-04-18 13:43:47.018341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 
nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.774 [2024-04-18 13:43:47.018757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.774 [2024-04-18 13:43:47.018772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.018789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.018805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.018822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.018837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.018859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.018875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.018892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.018907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.018924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.018947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.018966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.018982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.018999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 
00:16:11.775 [2024-04-18 13:43:47.019294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.775 [2024-04-18 13:43:47.019963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.775 [2024-04-18 13:43:47.019979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.019996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:73 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.020966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.020990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.021006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.021024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.021039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.021057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.021072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.021090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.021105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.021122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.021137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.021155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.776 [2024-04-18 13:43:47.021178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.776 [2024-04-18 13:43:47.021197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.777 [2024-04-18 13:43:47.021212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.777 [2024-04-18 13:43:47.021244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.777 [2024-04-18 13:43:47.021276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 
dnr:0 00:16:11.777 [2024-04-18 13:43:47.021293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.777 [2024-04-18 13:43:47.021308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021899] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.021977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.021993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.022025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.022059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.022097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.022131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.022165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.022197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:67792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1810ef 00:16:11.777 [2024-04-18 13:43:47.022230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.777 [2024-04-18 13:43:47.022247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67864 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.022650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1810ef 00:16:11.778 [2024-04-18 13:43:47.022666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.039658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:11.778 [2024-04-18 13:43:47.039688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:11.778 [2024-04-18 13:43:47.039704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:16:11.778 [2024-04-18 13:43:47.039719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.778 [2024-04-18 13:43:47.043307] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:16:11.778 [2024-04-18 13:43:47.043725] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:11.778 [2024-04-18 13:43:47.043753] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.778 [2024-04-18 13:43:47.043767] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:16:11.778 [2024-04-18 13:43:47.043797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.778 [2024-04-18 13:43:47.043815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
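The long run of NOTICE records above is the driver dumping every I/O that was still outstanding on I/O queue 1 when that queue was torn down for the controller reset: each WRITE (lba 67912 through 68600) and READ (lba 67584 through 67896) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08), and one last READ (cid:0, lba 67904) is completed manually after "aborting queued i/o". A quick way to summarize such a dump from a saved copy of this console output is sketched below; the file name console.log is only an assumption, not something the pipeline produces.

  # Hypothetical helper: tally the commands printed in the abort dump by opcode, with the LBA range covered.
  grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: (WRITE|READ) sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+' console.log \
    | awk '{ op = $3; split($7, a, ":"); lba = a[2] + 0
             n[op]++
             if (!(op in lo) || lba < lo[op]) lo[op] = lba
             if (lba > hi[op]) hi[op] = lba }
           END { for (op in n) printf "%-5s commands=%d lba %d..%d\n", op, n[op], lo[op], hi[op] }'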
00:16:11.778 [2024-04-18 13:43:47.043836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:16:11.778 [2024-04-18 13:43:47.043852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:16:11.778 [2024-04-18 13:43:47.043867] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:16:11.778 [2024-04-18 13:43:47.043909] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.778 [2024-04-18 13:43:47.043929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:16:11.778 [2024-04-18 13:43:48.047727] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:11.778 [2024-04-18 13:43:48.047800] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.778 [2024-04-18 13:43:48.047815] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:16:11.778 [2024-04-18 13:43:48.047849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.778 [2024-04-18 13:43:48.047866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:16:11.778 [2024-04-18 13:43:48.047888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:16:11.778 [2024-04-18 13:43:48.047903] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:16:11.778 [2024-04-18 13:43:48.047918] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:16:11.778 [2024-04-18 13:43:48.047966] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.778 [2024-04-18 13:43:48.047988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:16:11.778 [2024-04-18 13:43:49.052554] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:11.778 [2024-04-18 13:43:49.052612] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.778 [2024-04-18 13:43:49.052639] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:16:11.778 [2024-04-18 13:43:49.052673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.778 [2024-04-18 13:43:49.052689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:16:11.778 [2024-04-18 13:43:49.052718] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:16:11.778 [2024-04-18 13:43:49.052733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:16:11.778 [2024-04-18 13:43:49.052747] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:16:11.778 [2024-04-18 13:43:49.052783] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.778 [2024-04-18 13:43:49.052801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:16:11.778 [2024-04-18 13:43:51.057783] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.778 [2024-04-18 13:43:51.057838] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:16:11.778 [2024-04-18 13:43:51.057892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.778 [2024-04-18 13:43:51.057918] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:16:11.778 [2024-04-18 13:43:51.057945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:16:11.778 [2024-04-18 13:43:51.057962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:16:11.778 [2024-04-18 13:43:51.057977] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:16:11.779 [2024-04-18 13:43:51.058024] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.779 [2024-04-18 13:43:51.058043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:16:11.779 [2024-04-18 13:43:53.063035] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.779 [2024-04-18 13:43:53.063074] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:16:11.779 [2024-04-18 13:43:53.063126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.779 [2024-04-18 13:43:53.063142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:16:11.779 [2024-04-18 13:43:53.063180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:16:11.779 [2024-04-18 13:43:53.063194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:16:11.779 [2024-04-18 13:43:53.063217] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:16:11.779 [2024-04-18 13:43:53.063272] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
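Between 13:43:47 and 13:43:55 the driver keeps trying to reconnect nqn.2016-06.io.spdk:system_mlx_0_0: every attempt fails in RDMA address resolution (RDMA_CM_EVENT_ADDR_ERROR where RDMA_CM_EVENT_ADDR_RESOLVED was expected, then CQ transport error -6), the controller is marked failed, and a new reset is started. The failures land about one second apart for the first retries and about two seconds apart after that. A small sketch that prints this spacing from a saved copy of the output (again assuming the hypothetical file name console.log):

  # Hypothetical helper: list each RDMA address-resolution failure and the gap since the previous one.
  grep -Eo '\[2024-04-18 [0-9:.]+\] nvme_rdma\.c:1085:nvme_rdma_addr_resolved: \*ERROR\*: RDMA address resolution error' console.log \
    | awk '{ sub(/\]$/, "", $2)                 # drop the trailing "]" from the time stamp
             split($2, t, ":"); s = t[1]*3600 + t[2]*60 + t[3]
             if (NR == 1) printf "%s  (first failure)\n", $2
             else         printf "%s  +%.3f s\n", $2, s - prev
             prev = s }'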
00:16:11.779 [2024-04-18 13:43:53.063291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:16:11.779 [2024-04-18 13:43:55.068305] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.779 [2024-04-18 13:43:55.068347] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:16:11.779 [2024-04-18 13:43:55.068406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.779 [2024-04-18 13:43:55.068437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:16:11.779 [2024-04-18 13:43:55.068457] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:16:11.779 [2024-04-18 13:43:55.068470] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:16:11.779 [2024-04-18 13:43:55.068484] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:16:11.779 [2024-04-18 13:43:55.068520] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.779 [2024-04-18 13:43:55.068537] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:16:11.779 [2024-04-18 13:43:56.057603] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:11.779 [2024-04-18 13:43:56.057646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.779 [2024-04-18 13:43:56.057665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.779 [2024-04-18 13:43:56.057681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.779 [2024-04-18 13:43:56.057694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.779 [2024-04-18 13:43:56.057708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.779 [2024-04-18 13:43:56.057722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.779 [2024-04-18 13:43:56.057736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.779 [2024-04-18 13:43:56.057749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32628 cdw0:16 sqhd:75dc p:0 m:0 dnr:0 00:16:11.779 [2024-04-18 13:43:56.071209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.779 [2024-04-18 13:43:56.071273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
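At 13:43:56 a similar teardown hits the second path, nqn.2016-06.io.spdk:system_mlx_0_1: rdma_disconnect fails with EINVAL (22), the four outstanding ASYNC EVENT REQUEST admin commands (opcode 0c, cid 1 through 4) are completed as ABORTED - SQ DELETION, and that controller is also marked failed. In the records that follow, one of the resets does complete ("Resetting controller successful" at 13:43:56.131254), while bdev_nvme keeps printing "Unable to perform failover, already in progress" for failover requests that arrive while a reset is still running. A one-liner to tally these outcomes across the whole saved log (console.log remains a hypothetical name):

  # Hypothetical helper: count the reset/failover outcome notices in the console output.
  grep -Eo 'Resetting controller (successful|failed)\.|Unable to perform failover, already in progress\.' console.log \
    | sort | uniq -c | sort -rn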
00:16:11.779 [2024-04-18 13:43:56.071593] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:11.779 [2024-04-18 13:43:56.071674] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.081667] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.091690] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.116272] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.126225] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.131254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:11.779 [2024-04-18 13:43:56.136251] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.146275] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.156300] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.166328] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.176353] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.186379] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.196403] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.206430] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.216457] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.226484] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.236510] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.246535] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.256564] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.266590] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.276616] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.286641] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.779 [2024-04-18 13:43:56.296668] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.306694] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.316722] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.326750] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.336775] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.346804] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.356830] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.366856] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.376882] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.386906] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.396943] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.406967] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.416993] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.427018] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.437043] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.447069] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.457096] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.467124] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.477152] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.487180] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.497208] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.507234] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.517260] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.779 [2024-04-18 13:43:56.527285] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.537311] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.547335] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.557363] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.779 [2024-04-18 13:43:56.567389] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.577415] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.587442] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.597468] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.607496] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.617521] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.627546] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.637573] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.647602] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.657630] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.667655] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.677680] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.687705] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.697733] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.707761] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.717786] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.727812] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.737840] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.747865] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.780 [2024-04-18 13:43:56.757893] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.767918] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.777953] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.787978] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.798004] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.808034] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.818059] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.828085] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.838112] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.848141] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.858169] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.868194] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.878219] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.888247] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.898273] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.908300] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.918328] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.928354] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.938380] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.948407] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.958435] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.968462] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.978489] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:11.780 [2024-04-18 13:43:56.988514] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:56.998540] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.008567] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.018593] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.028621] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.038645] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.048671] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.058698] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.068727] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:11.780 [2024-04-18 13:43:57.074739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.074765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.074817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.074831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.074847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.074860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.074875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.074888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.074903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.074916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.074931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 
13:43:57.074967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.074994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.075009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.075024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.075053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.075066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.075081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.075094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.075109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.075122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.075137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.075150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.780 [2024-04-18 13:43:57.075165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bf0ef 00:16:11.780 [2024-04-18 13:43:57.075178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bf0ef 00:16:11.781 [2024-04-18 13:43:57.075653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.075977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.075991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.076005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.076039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.076067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 
00:16:11.781 [2024-04-18 13:43:57.076095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.076123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.076151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.076178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.781 [2024-04-18 13:43:57.076216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.781 [2024-04-18 13:43:57.076229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.076945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:11.782 [2024-04-18 13:43:57.076976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.076991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.077004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.077019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.077032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.077046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.077059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.077073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.077086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.077101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.782 [2024-04-18 13:43:57.077118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.782 [2024-04-18 13:43:57.077133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 
nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 
dnr:0 00:16:11.783 [2024-04-18 13:43:57.077816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.077987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.077999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.078014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.078027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.078041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.078054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.078069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.078081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.078096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.078109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.078123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.078137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.783 [2024-04-18 13:43:57.078151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.783 [2024-04-18 13:43:57.078164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078428] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.078556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.784 [2024-04-18 13:43:57.078570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32628 cdw0:27f05f20 sqhd:db93 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.093474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:11.784 [2024-04-18 13:43:57.093496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:11.784 [2024-04-18 13:43:57.093523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47888 len:8 PRP1 0x0 PRP2 0x0 00:16:11.784 [2024-04-18 13:43:57.093536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.784 [2024-04-18 13:43:57.093612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.784 [2024-04-18 13:43:57.093980] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:11.784 [2024-04-18 13:43:57.094004] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.784 [2024-04-18 13:43:57.094017] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:11.784 [2024-04-18 13:43:57.094044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.784 [2024-04-18 13:43:57.094059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:16:11.784 [2024-04-18 13:43:57.094078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:11.784 [2024-04-18 13:43:57.094092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:11.784 [2024-04-18 13:43:57.094105] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:11.784 [2024-04-18 13:43:57.094138] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.784 [2024-04-18 13:43:57.094155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.784 [2024-04-18 13:43:58.098412] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:11.784 [2024-04-18 13:43:58.098464] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.784 [2024-04-18 13:43:58.098492] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:11.784 [2024-04-18 13:43:58.098525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.784 [2024-04-18 13:43:58.098541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:11.784 [2024-04-18 13:43:58.098564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:11.784 [2024-04-18 13:43:58.098579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:11.784 [2024-04-18 13:43:58.098592] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:11.784 [2024-04-18 13:43:58.098631] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.784 [2024-04-18 13:43:58.098649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.784 [2024-04-18 13:43:59.101198] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:11.784 [2024-04-18 13:43:59.101270] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.784 [2024-04-18 13:43:59.101290] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:11.784 [2024-04-18 13:43:59.101347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.784 [2024-04-18 13:43:59.101363] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:16:11.784 [2024-04-18 13:43:59.101382] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:11.784 [2024-04-18 13:43:59.101396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:11.784 [2024-04-18 13:43:59.101409] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:11.784 [2024-04-18 13:43:59.101446] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.784 [2024-04-18 13:43:59.101464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.784 [2024-04-18 13:44:01.107598] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.784 [2024-04-18 13:44:01.107651] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:11.784 [2024-04-18 13:44:01.107704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.784 [2024-04-18 13:44:01.107722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:11.784 [2024-04-18 13:44:01.107744] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:11.784 [2024-04-18 13:44:01.107759] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:11.784 [2024-04-18 13:44:01.107773] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:11.784 [2024-04-18 13:44:01.108122] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.784 [2024-04-18 13:44:01.108148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.784 [2024-04-18 13:44:03.114430] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.784 [2024-04-18 13:44:03.114486] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:11.784 [2024-04-18 13:44:03.114539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.784 [2024-04-18 13:44:03.114555] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:11.784 [2024-04-18 13:44:03.115074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:11.784 [2024-04-18 13:44:03.115098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:11.784 [2024-04-18 13:44:03.115114] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:11.784 [2024-04-18 13:44:03.115180] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:11.784 [2024-04-18 13:44:03.115216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.785 [2024-04-18 13:44:05.122743] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.785 [2024-04-18 13:44:05.122807] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:11.785 [2024-04-18 13:44:05.122859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.785 [2024-04-18 13:44:05.122875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:11.785 [2024-04-18 13:44:05.122903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:11.785 [2024-04-18 13:44:05.122932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:11.785 [2024-04-18 13:44:05.122956] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:11.785 [2024-04-18 13:44:05.123000] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.785 [2024-04-18 13:44:05.123019] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.785 [2024-04-18 13:44:07.129880] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:11.785 [2024-04-18 13:44:07.129971] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:11.785 [2024-04-18 13:44:07.130043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:11.785 [2024-04-18 13:44:07.130060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:11.785 [2024-04-18 13:44:07.130085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:11.785 [2024-04-18 13:44:07.130101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:11.785 [2024-04-18 13:44:07.130116] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:11.785 [2024-04-18 13:44:07.130176] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:11.785 [2024-04-18 13:44:07.130195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:11.785 [2024-04-18 13:44:08.187045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
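The reset loop above retries roughly every one to two seconds after each RDMA_CM address-resolution failure until the removed port comes back and the final attempt at 13:44:08 reports success. If this console output is saved to a file, the reconnect timeline can be pulled out with a plain grep; the file name console.log below is only an assumption for illustration, not something the test itself writes:

  grep -E 'resetting controller|Resetting controller (failed|successful)' console.log

Each matching line carries the bracketed timestamp of one disconnect/reset attempt and its outcome, which is how the interval between attempts above can be read off.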
00:16:11.785 00:16:11.785 Latency(us) 00:16:11.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.785 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:11.785 Verification LBA range: start 0x0 length 0x8000 00:16:11.785 Nvme_mlx_0_0n1 : 90.01 9209.69 35.98 0.00 0.00 13873.18 2487.94 11085390.13 00:16:11.785 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:11.785 Verification LBA range: start 0x0 length 0x8000 00:16:11.785 Nvme_mlx_0_1n1 : 90.01 7902.39 30.87 0.00 0.00 16175.71 2949.12 13123511.18 00:16:11.785 =================================================================================================================== 00:16:11.785 Total : 17112.08 66.84 0.00 0.00 14936.52 2487.94 13123511.18 00:16:11.785 Received shutdown signal, test time was about 90.000000 seconds 00:16:11.785 00:16:11.785 Latency(us) 00:16:11.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.785 =================================================================================================================== 00:16:11.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:11.785 13:45:11 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:16:11.785 13:45:11 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:16:11.785 13:45:11 -- target/device_removal.sh@202 -- # killprocess 1138609 00:16:11.785 13:45:11 -- common/autotest_common.sh@936 -- # '[' -z 1138609 ']' 00:16:11.785 13:45:11 -- common/autotest_common.sh@940 -- # kill -0 1138609 00:16:11.785 13:45:11 -- common/autotest_common.sh@941 -- # uname 00:16:11.785 13:45:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.785 13:45:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1138609 00:16:11.785 13:45:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.785 13:45:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.785 13:45:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1138609' 00:16:11.785 killing process with pid 1138609 00:16:11.785 13:45:11 -- common/autotest_common.sh@955 -- # kill 1138609 00:16:11.785 13:45:11 -- common/autotest_common.sh@960 -- # wait 1138609 00:16:11.785 13:45:12 -- target/device_removal.sh@203 -- # nvmfpid= 00:16:11.785 13:45:12 -- target/device_removal.sh@205 -- # return 0 00:16:11.785 00:16:11.785 real 1m32.663s 00:16:11.785 user 4m23.609s 00:16:11.785 sys 0m3.615s 00:16:11.785 13:45:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.785 13:45:12 -- common/autotest_common.sh@10 -- # set +x 00:16:11.785 ************************************ 00:16:11.785 END TEST nvmf_device_removal_pci_remove 00:16:11.785 ************************************ 00:16:11.785 13:45:12 -- target/device_removal.sh@317 -- # nvmftestfini 00:16:11.785 13:45:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:11.785 13:45:12 -- nvmf/common.sh@117 -- # sync 00:16:11.785 13:45:12 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:11.785 13:45:12 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:11.785 13:45:12 -- nvmf/common.sh@120 -- # set +e 00:16:11.785 13:45:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.785 13:45:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:11.785 rmmod nvme_rdma 00:16:11.785 rmmod nvme_fabrics 00:16:11.785 13:45:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.785 
13:45:12 -- nvmf/common.sh@124 -- # set -e 00:16:11.785 13:45:12 -- nvmf/common.sh@125 -- # return 0 00:16:11.785 13:45:12 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:11.785 13:45:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:11.785 13:45:12 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:11.785 13:45:12 -- target/device_removal.sh@318 -- # clean_bond_device 00:16:11.785 13:45:12 -- target/device_removal.sh@240 -- # ip link 00:16:11.785 13:45:12 -- target/device_removal.sh@240 -- # grep bond_nvmf 00:16:11.785 00:16:11.785 real 3m8.457s 00:16:11.785 user 8m48.284s 00:16:11.785 sys 0m9.285s 00:16:11.785 13:45:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.785 13:45:12 -- common/autotest_common.sh@10 -- # set +x 00:16:11.785 ************************************ 00:16:11.785 END TEST nvmf_device_removal 00:16:11.785 ************************************ 00:16:11.785 13:45:12 -- nvmf/nvmf.sh@79 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:11.785 13:45:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:11.785 13:45:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.785 13:45:12 -- common/autotest_common.sh@10 -- # set +x 00:16:11.785 ************************************ 00:16:11.785 START TEST nvmf_srq_overwhelm 00:16:11.785 ************************************ 00:16:11.785 13:45:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:11.785 * Looking for test storage... 00:16:11.785 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:11.785 13:45:12 -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.785 13:45:12 -- nvmf/common.sh@7 -- # uname -s 00:16:11.785 13:45:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.785 13:45:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.785 13:45:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.785 13:45:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.785 13:45:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.785 13:45:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.785 13:45:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.785 13:45:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.785 13:45:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.785 13:45:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.785 13:45:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:16:11.785 13:45:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:16:11.785 13:45:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.785 13:45:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.785 13:45:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.785 13:45:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.785 13:45:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:11.785 13:45:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.785 13:45:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.785 13:45:12 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.786 13:45:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.786 13:45:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.786 13:45:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.786 13:45:12 -- paths/export.sh@5 -- # export PATH 00:16:11.786 13:45:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.786 13:45:12 -- nvmf/common.sh@47 -- # : 0 00:16:11.786 13:45:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.786 13:45:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.786 13:45:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.786 13:45:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.786 13:45:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.786 13:45:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.786 13:45:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.786 13:45:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.786 13:45:12 -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.786 13:45:12 -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.786 13:45:12 -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:16:11.786 13:45:12 -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:16:11.786 13:45:12 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:11.786 13:45:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.786 13:45:12 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:16:11.786 13:45:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:11.786 13:45:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:11.786 13:45:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.786 13:45:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.786 13:45:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.786 13:45:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:11.786 13:45:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:11.786 13:45:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.786 13:45:12 -- common/autotest_common.sh@10 -- # set +x 00:16:12.725 13:45:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:12.725 13:45:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:12.725 13:45:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:12.725 13:45:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:12.725 13:45:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:12.725 13:45:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:12.725 13:45:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:12.725 13:45:15 -- nvmf/common.sh@295 -- # net_devs=() 00:16:12.725 13:45:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:12.725 13:45:15 -- nvmf/common.sh@296 -- # e810=() 00:16:12.725 13:45:15 -- nvmf/common.sh@296 -- # local -ga e810 00:16:12.725 13:45:15 -- nvmf/common.sh@297 -- # x722=() 00:16:12.725 13:45:15 -- nvmf/common.sh@297 -- # local -ga x722 00:16:12.725 13:45:15 -- nvmf/common.sh@298 -- # mlx=() 00:16:12.725 13:45:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:12.725 13:45:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.725 13:45:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:12.725 13:45:15 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:12.725 13:45:15 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:12.725 13:45:15 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:12.725 13:45:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:12.725 13:45:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.725 13:45:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:16:12.725 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:16:12.725 13:45:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:12.725 13:45:15 -- 
nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:12.725 13:45:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.725 13:45:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:16:12.725 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:16:12.725 13:45:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:12.725 13:45:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:12.725 13:45:15 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.725 13:45:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.725 13:45:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:12.725 13:45:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.725 13:45:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:16:12.725 Found net devices under 0000:81:00.0: mlx_0_0 00:16:12.725 13:45:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.725 13:45:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.725 13:45:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.725 13:45:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:12.725 13:45:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.725 13:45:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:16:12.725 Found net devices under 0000:81:00.1: mlx_0_1 00:16:12.725 13:45:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.725 13:45:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:12.725 13:45:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:12.725 13:45:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:12.725 13:45:15 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:12.725 13:45:15 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:12.725 13:45:15 -- nvmf/common.sh@58 -- # uname 00:16:12.725 13:45:15 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:12.725 13:45:15 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:12.725 13:45:15 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:12.725 13:45:15 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:12.725 13:45:15 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:12.725 13:45:15 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:12.726 13:45:15 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:12.726 13:45:15 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:12.726 13:45:15 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:12.726 13:45:15 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:12.726 13:45:15 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:12.726 13:45:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:16:12.726 13:45:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:12.726 13:45:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:12.726 13:45:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:12.726 13:45:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:12.726 13:45:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@105 -- # continue 2 00:16:12.726 13:45:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@105 -- # continue 2 00:16:12.726 13:45:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:12.726 13:45:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:12.726 13:45:15 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:12.726 13:45:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:12.726 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:12.726 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:16:12.726 altname enp129s0f0np0 00:16:12.726 inet 192.168.100.8/24 scope global mlx_0_0 00:16:12.726 valid_lft forever preferred_lft forever 00:16:12.726 13:45:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:12.726 13:45:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:12.726 13:45:15 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:12.726 13:45:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:12.726 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:12.726 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:16:12.726 altname enp129s0f1np1 00:16:12.726 inet 192.168.100.9/24 scope global mlx_0_1 00:16:12.726 valid_lft forever preferred_lft forever 00:16:12.726 13:45:15 -- nvmf/common.sh@411 -- # return 0 00:16:12.726 13:45:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:12.726 13:45:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:12.726 13:45:15 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:12.726 13:45:15 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:12.726 13:45:15 -- nvmf/common.sh@92 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:16:12.726 13:45:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:12.726 13:45:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:12.726 13:45:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:12.726 13:45:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:12.726 13:45:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@105 -- # continue 2 00:16:12.726 13:45:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:12.726 13:45:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:12.726 13:45:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@105 -- # continue 2 00:16:12.726 13:45:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:12.726 13:45:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:12.726 13:45:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:12.726 13:45:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:12.726 13:45:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:12.726 13:45:15 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:12.726 192.168.100.9' 00:16:12.726 13:45:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:12.726 192.168.100.9' 00:16:12.726 13:45:15 -- nvmf/common.sh@446 -- # head -n 1 00:16:12.726 13:45:15 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:12.726 13:45:15 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:12.726 192.168.100.9' 00:16:12.726 13:45:15 -- nvmf/common.sh@447 -- # tail -n +2 00:16:12.726 13:45:15 -- nvmf/common.sh@447 -- # head -n 1 00:16:12.726 13:45:15 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:12.726 13:45:15 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:12.726 13:45:15 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:12.726 13:45:15 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:12.726 13:45:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:12.726 13:45:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:12.726 13:45:15 -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:16:12.726 13:45:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:12.726 13:45:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:12.726 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:16:12.726 13:45:15 -- nvmf/common.sh@470 -- # nvmfpid=1152283 00:16:12.726 13:45:15 -- 
nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:12.726 13:45:15 -- nvmf/common.sh@471 -- # waitforlisten 1152283 00:16:12.726 13:45:15 -- common/autotest_common.sh@817 -- # '[' -z 1152283 ']' 00:16:12.726 13:45:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.726 13:45:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:12.726 13:45:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.726 13:45:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:12.726 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:16:12.726 [2024-04-18 13:45:15.506269] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:12.726 [2024-04-18 13:45:15.506374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.984 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.984 [2024-04-18 13:45:15.586205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:12.984 [2024-04-18 13:45:15.709045] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.984 [2024-04-18 13:45:15.709106] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.984 [2024-04-18 13:45:15.709122] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.984 [2024-04-18 13:45:15.709136] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.984 [2024-04-18 13:45:15.709147] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.984 [2024-04-18 13:45:15.709214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.984 [2024-04-18 13:45:15.709270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.984 [2024-04-18 13:45:15.709322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.984 [2024-04-18 13:45:15.709325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.242 13:45:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.242 13:45:15 -- common/autotest_common.sh@850 -- # return 0 00:16:13.242 13:45:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:13.242 13:45:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:13.242 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 13:45:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.242 13:45:15 -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:16:13.242 13:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.242 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 [2024-04-18 13:45:15.908821] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x148e090/0x1492580) succeed. 00:16:13.242 [2024-04-18 13:45:15.921176] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x148f680/0x14d3c10) succeed. 
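The subsystem setup traced below repeats the same RPC and connect steps for cnode0 through cnode5. Condensed into a sketch (this is not a literal excerpt of srq_overwhelm.sh; it just reuses the rpc_cmd and waitforblk helpers, the 192.168.100.8:4420 RDMA listener, and the NVME_HOSTNQN/NVME_HOSTID variables that the trace itself defines):

  # transport is created once: rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
  for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma \
      -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    waitforblk nvme${i}n1
  done

After the loop, the fio-wrapper invocation seen later (-i 1048576 -d 128 -t read -r 10 -n 13) runs 13 read jobs at iodepth 128 against each of /dev/nvme0n1 through /dev/nvme5n1, which is where the "Starting 78 threads" line comes from (6 namespaces x 13 jobs).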
00:16:13.242 13:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.242 13:45:15 -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:16:13.242 13:45:15 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:13.242 13:45:15 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:16:13.242 13:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.242 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 13:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.242 13:45:15 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:13.242 13:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.242 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 Malloc0 00:16:13.242 13:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.242 13:45:16 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:13.242 13:45:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.242 13:45:16 -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 13:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.242 13:45:16 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:13.242 13:45:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.242 13:45:16 -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 [2024-04-18 13:45:16.032983] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:13.242 13:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.242 13:45:16 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:16:14.612 13:45:17 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:16:14.612 13:45:17 -- common/autotest_common.sh@1221 -- # local i=0 00:16:14.612 13:45:17 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:14.612 13:45:17 -- common/autotest_common.sh@1222 -- # grep -q -w nvme0n1 00:16:14.612 13:45:17 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:14.612 13:45:17 -- common/autotest_common.sh@1228 -- # grep -q -w nvme0n1 00:16:14.612 13:45:17 -- common/autotest_common.sh@1232 -- # return 0 00:16:14.612 13:45:17 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:14.612 13:45:17 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:14.612 13:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.612 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:16:14.612 13:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.612 13:45:17 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:14.612 13:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.612 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:16:14.612 Malloc1 00:16:14.612 13:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.612 13:45:17 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.612 13:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.612 13:45:17 -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.612 13:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.612 13:45:17 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:14.612 13:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.612 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:16:14.612 13:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.612 13:45:17 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:15.982 13:45:18 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:16:15.982 13:45:18 -- common/autotest_common.sh@1221 -- # local i=0 00:16:15.982 13:45:18 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:15.982 13:45:18 -- common/autotest_common.sh@1222 -- # grep -q -w nvme1n1 00:16:15.982 13:45:18 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:15.982 13:45:18 -- common/autotest_common.sh@1228 -- # grep -q -w nvme1n1 00:16:15.982 13:45:18 -- common/autotest_common.sh@1232 -- # return 0 00:16:15.982 13:45:18 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:15.982 13:45:18 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:15.982 13:45:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.982 13:45:18 -- common/autotest_common.sh@10 -- # set +x 00:16:15.982 13:45:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.982 13:45:18 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:15.982 13:45:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.982 13:45:18 -- common/autotest_common.sh@10 -- # set +x 00:16:15.982 Malloc2 00:16:15.982 13:45:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.982 13:45:18 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:15.982 13:45:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.982 13:45:18 -- common/autotest_common.sh@10 -- # set +x 00:16:15.982 13:45:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.982 13:45:18 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:16:15.982 13:45:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.982 13:45:18 -- common/autotest_common.sh@10 -- # set +x 00:16:15.982 13:45:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.982 13:45:18 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:16:16.914 13:45:19 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:16:16.914 13:45:19 -- common/autotest_common.sh@1221 -- # local i=0 00:16:16.914 13:45:19 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:16.914 13:45:19 -- common/autotest_common.sh@1222 -- # grep -q -w nvme2n1 00:16:16.914 13:45:19 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:16.914 13:45:19 -- common/autotest_common.sh@1228 -- # grep -q -w nvme2n1 00:16:16.914 13:45:19 -- common/autotest_common.sh@1232 -- # return 0 
00:16:16.914 13:45:19 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:16.914 13:45:19 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:16.914 13:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.914 13:45:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.914 13:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.914 13:45:19 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:16.914 13:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.914 13:45:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.914 Malloc3 00:16:16.914 13:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.914 13:45:19 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:16.914 13:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.914 13:45:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.914 13:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.914 13:45:19 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:16:16.914 13:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.914 13:45:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.914 13:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.914 13:45:19 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:16:18.285 13:45:20 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:16:18.285 13:45:20 -- common/autotest_common.sh@1221 -- # local i=0 00:16:18.285 13:45:20 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:18.285 13:45:20 -- common/autotest_common.sh@1222 -- # grep -q -w nvme3n1 00:16:18.285 13:45:20 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:18.285 13:45:20 -- common/autotest_common.sh@1228 -- # grep -q -w nvme3n1 00:16:18.285 13:45:20 -- common/autotest_common.sh@1232 -- # return 0 00:16:18.285 13:45:20 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:18.285 13:45:20 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:18.285 13:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.285 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 13:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.285 13:45:20 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:18.285 13:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.285 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 Malloc4 00:16:18.285 13:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.285 13:45:20 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:18.285 13:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.285 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 13:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.285 13:45:20 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:16:18.285 13:45:20 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.285 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.285 13:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.285 13:45:20 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:16:19.216 13:45:21 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:16:19.216 13:45:21 -- common/autotest_common.sh@1221 -- # local i=0 00:16:19.216 13:45:21 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:19.216 13:45:21 -- common/autotest_common.sh@1222 -- # grep -q -w nvme4n1 00:16:19.216 13:45:21 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:19.216 13:45:21 -- common/autotest_common.sh@1228 -- # grep -q -w nvme4n1 00:16:19.216 13:45:21 -- common/autotest_common.sh@1232 -- # return 0 00:16:19.216 13:45:21 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:19.216 13:45:21 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:16:19.216 13:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.216 13:45:21 -- common/autotest_common.sh@10 -- # set +x 00:16:19.216 13:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.216 13:45:21 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:19.216 13:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.216 13:45:21 -- common/autotest_common.sh@10 -- # set +x 00:16:19.216 Malloc5 00:16:19.216 13:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.216 13:45:21 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:19.216 13:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.216 13:45:21 -- common/autotest_common.sh@10 -- # set +x 00:16:19.216 13:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.216 13:45:21 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:16:19.216 13:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.216 13:45:21 -- common/autotest_common.sh@10 -- # set +x 00:16:19.216 13:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.216 13:45:21 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:16:20.586 13:45:23 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:16:20.586 13:45:23 -- common/autotest_common.sh@1221 -- # local i=0 00:16:20.586 13:45:23 -- common/autotest_common.sh@1222 -- # grep -q -w nvme5n1 00:16:20.586 13:45:23 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:20.586 13:45:23 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:20.586 13:45:23 -- common/autotest_common.sh@1228 -- # grep -q -w nvme5n1 00:16:20.586 13:45:23 -- common/autotest_common.sh@1232 -- # return 0 00:16:20.586 13:45:23 -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:16:20.586 [global] 00:16:20.586 thread=1 00:16:20.586 invalidate=1 00:16:20.586 rw=read 00:16:20.586 time_based=1 00:16:20.586 
runtime=10 00:16:20.586 ioengine=libaio 00:16:20.586 direct=1 00:16:20.586 bs=1048576 00:16:20.586 iodepth=128 00:16:20.586 norandommap=1 00:16:20.586 numjobs=13 00:16:20.586 00:16:20.586 [job0] 00:16:20.586 filename=/dev/nvme0n1 00:16:20.586 [job1] 00:16:20.586 filename=/dev/nvme1n1 00:16:20.586 [job2] 00:16:20.586 filename=/dev/nvme2n1 00:16:20.586 [job3] 00:16:20.586 filename=/dev/nvme3n1 00:16:20.586 [job4] 00:16:20.586 filename=/dev/nvme4n1 00:16:20.586 [job5] 00:16:20.586 filename=/dev/nvme5n1 00:16:20.586 Could not set queue depth (nvme0n1) 00:16:20.586 Could not set queue depth (nvme1n1) 00:16:20.586 Could not set queue depth (nvme2n1) 00:16:20.586 Could not set queue depth (nvme3n1) 00:16:20.586 Could not set queue depth (nvme4n1) 00:16:20.586 Could not set queue depth (nvme5n1) 00:16:20.848 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:20.848 ... 00:16:20.848 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:20.848 ... 00:16:20.848 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:20.848 ... 00:16:20.848 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:20.848 ... 00:16:20.848 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:20.848 ... 00:16:20.848 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:20.848 ... 00:16:20.848 fio-3.35 00:16:20.848 Starting 78 threads 00:16:35.717 00:16:35.717 job0: (groupid=0, jobs=1): err= 0: pid=1153321: Thu Apr 18 13:45:38 2024 00:16:35.717 read: IOPS=1, BW=1153KiB/s (1181kB/s)(16.0MiB/14210msec) 00:16:35.717 slat (usec): min=1023, max=4182.2k, avg=627178.95, stdev=1203363.60 00:16:35.717 clat (msec): min=4174, max=14177, avg=10372.03, stdev=3690.75 00:16:35.717 lat (msec): min=6311, max=14209, avg=10999.21, stdev=3409.25 00:16:35.717 clat percentiles (msec): 00:16:35.717 | 1.00th=[ 4178], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 6342], 00:16:35.717 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:16:35.717 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14160], 00:16:35.717 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.717 | 99.99th=[14160] 00:16:35.717 lat (msec) : >=2000=100.00% 00:16:35.717 cpu : usr=0.00%, sys=0.06%, ctx=45, majf=0, minf=4097 00:16:35.717 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:16:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.717 job0: (groupid=0, jobs=1): err= 0: pid=1153322: Thu Apr 18 13:45:38 2024 00:16:35.717 read: IOPS=2, BW=2438KiB/s (2497kB/s)(34.0MiB/14278msec) 00:16:35.717 slat (usec): min=522, max=4310.7k, avg=358789.04, stdev=1076632.76 00:16:35.717 clat (msec): min=2078, max=14173, avg=12838.51, stdev=2975.91 00:16:35.717 lat (msec): min=6388, max=14277, avg=13197.30, stdev=2297.28 00:16:35.717 clat percentiles (msec): 00:16:35.717 | 1.00th=[ 2072], 5.00th=[ 6409], 10.00th=[ 6409], 20.00th=[12818], 
00:16:35.717 | 30.00th=[14026], 40.00th=[14026], 50.00th=[14026], 60.00th=[14160], 00:16:35.717 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.717 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.717 | 99.99th=[14160] 00:16:35.717 lat (msec) : >=2000=100.00% 00:16:35.717 cpu : usr=0.00%, sys=0.14%, ctx=34, majf=0, minf=8705 00:16:35.717 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:16:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.717 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.717 job0: (groupid=0, jobs=1): err= 0: pid=1153323: Thu Apr 18 13:45:38 2024 00:16:35.717 read: IOPS=20, BW=20.6MiB/s (21.6MB/s)(294MiB/14287msec) 00:16:35.717 slat (usec): min=69, max=6397.9k, avg=41518.60, stdev=449497.65 00:16:35.717 clat (msec): min=307, max=14278, avg=6004.15, stdev=6419.33 00:16:35.717 lat (msec): min=310, max=14280, avg=6045.67, stdev=6430.36 00:16:35.717 clat percentiles (msec): 00:16:35.717 | 1.00th=[ 309], 5.00th=[ 321], 10.00th=[ 326], 20.00th=[ 330], 00:16:35.717 | 30.00th=[ 342], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[13355], 00:16:35.717 | 70.00th=[13355], 80.00th=[13489], 90.00th=[13489], 95.00th=[13624], 00:16:35.717 | 99.00th=[13624], 99.50th=[14026], 99.90th=[14295], 99.95th=[14295], 00:16:35.717 | 99.99th=[14295] 00:16:35.717 bw ( KiB/s): min= 1454, max=327680, per=4.17%, avg=85355.50, stdev=161564.79, samples=4 00:16:35.717 iops : min= 1, max= 320, avg=83.25, stdev=157.85, samples=4 00:16:35.717 lat (msec) : 500=53.06%, 750=1.36%, 1000=1.02%, >=2000=44.56% 00:16:35.717 cpu : usr=0.00%, sys=0.73%, ctx=613, majf=0, minf=32769 00:16:35.717 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.9%, >=64=78.6% 00:16:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:16:35.717 issued rwts: total=294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.717 job0: (groupid=0, jobs=1): err= 0: pid=1153324: Thu Apr 18 13:45:38 2024 00:16:35.717 read: IOPS=1, BW=1806KiB/s (1849kB/s)(25.0MiB/14176msec) 00:16:35.717 slat (usec): min=1756, max=2134.0k, avg=400288.06, stdev=796185.54 00:16:35.717 clat (msec): min=4168, max=14174, avg=9806.25, stdev=3554.59 00:16:35.717 lat (msec): min=4178, max=14175, avg=10206.54, stdev=3455.34 00:16:35.717 clat percentiles (msec): 00:16:35.717 | 1.00th=[ 4178], 5.00th=[ 4178], 10.00th=[ 6275], 20.00th=[ 6342], 00:16:35.717 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[10671], 60.00th=[10671], 00:16:35.717 | 70.00th=[12818], 80.00th=[12818], 90.00th=[14160], 95.00th=[14160], 00:16:35.717 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.717 | 99.99th=[14160] 00:16:35.717 lat (msec) : >=2000=100.00% 00:16:35.717 cpu : usr=0.00%, sys=0.12%, ctx=50, majf=0, minf=6401 00:16:35.717 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:16:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.717 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.717 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:16:35.717 job0: (groupid=0, jobs=1): err= 0: pid=1153325: Thu Apr 18 13:45:38 2024 00:16:35.717 read: IOPS=1, BW=1800KiB/s (1843kB/s)(25.0MiB/14221msec) 00:16:35.717 slat (usec): min=554, max=2144.6k, avg=401706.90, stdev=801039.20 00:16:35.717 clat (msec): min=4177, max=14217, avg=10912.57, stdev=3476.05 00:16:35.717 lat (msec): min=6300, max=14220, avg=11314.28, stdev=3237.45 00:16:35.717 clat percentiles (msec): 00:16:35.717 | 1.00th=[ 4178], 5.00th=[ 6275], 10.00th=[ 6342], 20.00th=[ 6342], 00:16:35.717 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:16:35.717 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160], 00:16:35.717 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.717 | 99.99th=[14160] 00:16:35.717 lat (msec) : >=2000=100.00% 00:16:35.717 cpu : usr=0.00%, sys=0.13%, ctx=45, majf=0, minf=6401 00:16:35.717 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:16:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.717 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.717 job0: (groupid=0, jobs=1): err= 0: pid=1153326: Thu Apr 18 13:45:38 2024 00:16:35.717 read: IOPS=14, BW=15.0MiB/s (15.7MB/s)(214MiB/14287msec) 00:16:35.717 slat (usec): min=51, max=2158.1k, avg=46953.72, stdev=269077.16 00:16:35.717 clat (msec): min=674, max=14115, avg=6513.82, stdev=3850.76 00:16:35.717 lat (msec): min=695, max=14143, avg=6560.78, stdev=3876.74 00:16:35.717 clat percentiles (msec): 00:16:35.717 | 1.00th=[ 693], 5.00th=[ 760], 10.00th=[ 776], 20.00th=[ 818], 00:16:35.717 | 30.00th=[ 3910], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 9731], 00:16:35.717 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10268], 00:16:35.717 | 99.00th=[12818], 99.50th=[12818], 99.90th=[14160], 99.95th=[14160], 00:16:35.717 | 99.99th=[14160] 00:16:35.717 bw ( KiB/s): min= 2019, max=94019, per=1.74%, avg=35591.60, stdev=37768.70, samples=5 00:16:35.717 iops : min= 1, max= 91, avg=34.40, stdev=36.79, samples=5 00:16:35.717 lat (msec) : 750=1.87%, 1000=20.09%, 2000=0.47%, >=2000=77.57% 00:16:35.717 cpu : usr=0.01%, sys=0.60%, ctx=177, majf=0, minf=32769 00:16:35.717 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.5%, 32=15.0%, >=64=70.6% 00:16:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:16:35.717 issued rwts: total=214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.717 job0: (groupid=0, jobs=1): err= 0: pid=1153327: Thu Apr 18 13:45:38 2024 00:16:35.717 read: IOPS=2, BW=2228KiB/s (2282kB/s)(31.0MiB/14247msec) 00:16:35.717 slat (usec): min=484, max=6364.5k, avg=392184.00, stdev=1360593.82 00:16:35.717 clat (msec): min=2088, max=14242, avg=11632.26, stdev=3809.22 00:16:35.717 lat (msec): min=6361, max=14246, avg=12024.44, stdev=3397.51 00:16:35.717 clat percentiles (msec): 00:16:35.717 | 1.00th=[ 2089], 5.00th=[ 6342], 10.00th=[ 6409], 20.00th=[ 6409], 00:16:35.717 | 30.00th=[12818], 40.00th=[12818], 50.00th=[14295], 60.00th=[14295], 00:16:35.717 | 70.00th=[14295], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:16:35.717 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 
99.95th=[14295], 00:16:35.717 | 99.99th=[14295] 00:16:35.717 lat (msec) : >=2000=100.00% 00:16:35.717 cpu : usr=0.00%, sys=0.13%, ctx=41, majf=0, minf=7937 00:16:35.717 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:16:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.717 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.718 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job0: (groupid=0, jobs=1): err= 0: pid=1153328: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=120, BW=120MiB/s (126MB/s)(1723MiB/14357msec) 00:16:35.718 slat (usec): min=70, max=4811.1k, avg=5861.15, stdev=126323.72 00:16:35.718 clat (msec): min=116, max=11366, avg=1041.25, stdev=2857.07 00:16:35.718 lat (msec): min=117, max=11367, avg=1047.11, stdev=2866.40 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 121], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 124], 00:16:35.718 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 184], 60.00th=[ 255], 00:16:35.718 | 70.00th=[ 268], 80.00th=[ 296], 90.00th=[ 667], 95.00th=[11342], 00:16:35.718 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:16:35.718 | 99.99th=[11342] 00:16:35.718 bw ( KiB/s): min= 1917, max=1050624, per=15.98%, avg=326863.20, stdev=371804.79, samples=10 00:16:35.718 iops : min= 1, max= 1026, avg=319.10, stdev=363.19, samples=10 00:16:35.718 lat (msec) : 250=54.21%, 500=30.64%, 750=7.37%, >=2000=7.78% 00:16:35.718 cpu : usr=0.08%, sys=1.78%, ctx=1768, majf=0, minf=32769 00:16:35.718 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.718 issued rwts: total=1723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job0: (groupid=0, jobs=1): err= 0: pid=1153329: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=3, BW=3879KiB/s (3972kB/s)(54.0MiB/14254msec) 00:16:35.718 slat (usec): min=556, max=2137.5k, avg=186295.97, stdev=572293.70 00:16:35.718 clat (msec): min=4192, max=14247, avg=12230.90, stdev=3076.43 00:16:35.718 lat (msec): min=6313, max=14253, avg=12417.20, stdev=2878.76 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 4178], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 8490], 00:16:35.718 | 30.00th=[12818], 40.00th=[12818], 50.00th=[14160], 60.00th=[14160], 00:16:35.718 | 70.00th=[14160], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:16:35.718 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.718 | 99.99th=[14295] 00:16:35.718 lat (msec) : >=2000=100.00% 00:16:35.718 cpu : usr=0.00%, sys=0.34%, ctx=69, majf=0, minf=13825 00:16:35.718 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.718 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job0: (groupid=0, jobs=1): err= 0: pid=1153330: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=3, BW=3453KiB/s (3536kB/s)(48.0MiB/14233msec) 00:16:35.718 slat (usec): min=531, max=2145.5k, 
avg=209246.28, stdev=606465.94 00:16:35.718 clat (msec): min=4188, max=14231, avg=12885.81, stdev=2726.65 00:16:35.718 lat (msec): min=6324, max=14232, avg=13095.06, stdev=2412.29 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 4178], 5.00th=[ 6342], 10.00th=[ 6409], 20.00th=[12818], 00:16:35.718 | 30.00th=[14026], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:16:35.718 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14295], 00:16:35.718 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.718 | 99.99th=[14295] 00:16:35.718 lat (msec) : >=2000=100.00% 00:16:35.718 cpu : usr=0.00%, sys=0.21%, ctx=61, majf=0, minf=12289 00:16:35.718 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.718 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job0: (groupid=0, jobs=1): err= 0: pid=1153331: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=4, BW=4207KiB/s (4308kB/s)(50.0MiB/12170msec) 00:16:35.718 slat (usec): min=429, max=2204.8k, avg=200044.20, stdev=599043.22 00:16:35.718 clat (msec): min=2167, max=12162, avg=8631.20, stdev=3574.60 00:16:35.718 lat (msec): min=4372, max=12169, avg=8831.24, stdev=3484.21 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 2165], 5.00th=[ 4396], 10.00th=[ 4396], 20.00th=[ 4396], 00:16:35.718 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[10805], 60.00th=[10805], 00:16:35.718 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.718 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.718 | 99.99th=[12147] 00:16:35.718 lat (msec) : >=2000=100.00% 00:16:35.718 cpu : usr=0.02%, sys=0.21%, ctx=62, majf=0, minf=12801 00:16:35.718 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.718 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job0: (groupid=0, jobs=1): err= 0: pid=1153332: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=4, BW=4638KiB/s (4750kB/s)(55.0MiB/12142msec) 00:16:35.718 slat (usec): min=466, max=2116.6k, avg=182001.14, stdev=559676.67 00:16:35.718 clat (msec): min=2131, max=12141, avg=8659.78, stdev=3535.49 00:16:35.718 lat (msec): min=2149, max=12141, avg=8841.78, stdev=3449.82 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 4329], 00:16:35.718 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10805], 00:16:35.718 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.718 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.718 | 99.99th=[12147] 00:16:35.718 lat (msec) : >=2000=100.00% 00:16:35.718 cpu : usr=0.00%, sys=0.27%, ctx=67, majf=0, minf=14081 00:16:35.718 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.718 
issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job0: (groupid=0, jobs=1): err= 0: pid=1153333: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=19, BW=19.4MiB/s (20.4MB/s)(279MiB/14357msec) 00:16:35.718 slat (usec): min=58, max=2134.0k, avg=36251.52, stdev=228310.44 00:16:35.718 clat (msec): min=868, max=8723, avg=4026.63, stdev=2702.20 00:16:35.718 lat (msec): min=874, max=8731, avg=4062.88, stdev=2709.65 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 869], 5.00th=[ 885], 10.00th=[ 894], 20.00th=[ 911], 00:16:35.718 | 30.00th=[ 936], 40.00th=[ 4732], 50.00th=[ 4933], 60.00th=[ 5134], 00:16:35.718 | 70.00th=[ 5269], 80.00th=[ 5470], 90.00th=[ 8658], 95.00th=[ 8658], 00:16:35.718 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:16:35.718 | 99.99th=[ 8658] 00:16:35.718 bw ( KiB/s): min= 1924, max=149504, per=3.80%, avg=77793.00, stdev=68155.47, samples=4 00:16:35.718 iops : min= 1, max= 146, avg=75.75, stdev=66.88, samples=4 00:16:35.718 lat (msec) : 1000=35.84%, 2000=2.15%, >=2000=62.01% 00:16:35.718 cpu : usr=0.01%, sys=0.77%, ctx=291, majf=0, minf=32769 00:16:35.718 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.7%, 32=11.5%, >=64=77.4% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:35.718 issued rwts: total=279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job1: (groupid=0, jobs=1): err= 0: pid=1153336: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=5, BW=5187KiB/s (5311kB/s)(62.0MiB/12240msec) 00:16:35.718 slat (usec): min=538, max=2131.8k, avg=162870.03, stdev=537374.16 00:16:35.718 clat (msec): min=2141, max=12238, avg=10569.70, stdev=2944.39 00:16:35.718 lat (msec): min=4254, max=12239, avg=10732.57, stdev=2742.92 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 2140], 5.00th=[ 4329], 10.00th=[ 4329], 20.00th=[ 8557], 00:16:35.718 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:16:35.718 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:35.718 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:35.718 | 99.99th=[12281] 00:16:35.718 lat (msec) : >=2000=100.00% 00:16:35.718 cpu : usr=0.00%, sys=0.53%, ctx=91, majf=0, minf=15873 00:16:35.718 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.718 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job1: (groupid=0, jobs=1): err= 0: pid=1153337: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=2, BW=2271KiB/s (2326kB/s)(27.0MiB/12174msec) 00:16:35.718 slat (usec): min=547, max=2158.5k, avg=370365.10, stdev=773933.50 00:16:35.718 clat (msec): min=2173, max=12171, avg=8476.18, stdev=3528.78 00:16:35.718 lat (msec): min=2174, max=12173, avg=8846.54, stdev=3362.72 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4396], 00:16:35.718 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10805], 00:16:35.718 | 70.00th=[10805], 80.00th=[12147], 
90.00th=[12147], 95.00th=[12147], 00:16:35.718 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.718 | 99.99th=[12147] 00:16:35.718 lat (msec) : >=2000=100.00% 00:16:35.718 cpu : usr=0.00%, sys=0.14%, ctx=67, majf=0, minf=6913 00:16:35.718 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:16:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.718 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.718 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.718 job1: (groupid=0, jobs=1): err= 0: pid=1153338: Thu Apr 18 13:45:38 2024 00:16:35.718 read: IOPS=6, BW=6897KiB/s (7062kB/s)(82.0MiB/12175msec) 00:16:35.718 slat (usec): min=509, max=2135.8k, avg=122046.66, stdev=460531.87 00:16:35.718 clat (msec): min=2166, max=12173, avg=9170.91, stdev=3374.61 00:16:35.718 lat (msec): min=2175, max=12174, avg=9292.96, stdev=3298.26 00:16:35.718 clat percentiles (msec): 00:16:35.718 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 4329], 00:16:35.719 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10805], 60.00th=[12013], 00:16:35.719 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.719 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.719 | 99.99th=[12147] 00:16:35.719 lat (msec) : >=2000=100.00% 00:16:35.719 cpu : usr=0.00%, sys=0.45%, ctx=88, majf=0, minf=20993 00:16:35.719 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=9.8%, 16=19.5%, 32=39.0%, >=64=23.2% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.719 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153339: Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=30, BW=30.9MiB/s (32.4MB/s)(314MiB/10176msec) 00:16:35.719 slat (usec): min=59, max=2107.6k, avg=32165.33, stdev=211740.38 00:16:35.719 clat (msec): min=72, max=8652, avg=2658.04, stdev=3123.00 00:16:35.719 lat (msec): min=437, max=8658, avg=2690.20, stdev=3137.46 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 439], 5.00th=[ 535], 10.00th=[ 642], 20.00th=[ 860], 00:16:35.719 | 30.00th=[ 885], 40.00th=[ 902], 50.00th=[ 927], 60.00th=[ 1028], 00:16:35.719 | 70.00th=[ 1200], 80.00th=[ 7416], 90.00th=[ 8557], 95.00th=[ 8557], 00:16:35.719 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:16:35.719 | 99.99th=[ 8658] 00:16:35.719 bw ( KiB/s): min=24576, max=149504, per=4.66%, avg=95232.00, stdev=59556.55, samples=4 00:16:35.719 iops : min= 24, max= 146, avg=93.00, stdev=58.16, samples=4 00:16:35.719 lat (msec) : 100=0.32%, 500=2.87%, 750=11.46%, 1000=43.95%, 2000=16.24% 00:16:35.719 lat (msec) : >=2000=25.16% 00:16:35.719 cpu : usr=0.00%, sys=1.19%, ctx=322, majf=0, minf=32502 00:16:35.719 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.2%, >=64=79.9% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:35.719 issued rwts: total=314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153340: 
Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=4, BW=4753KiB/s (4867kB/s)(66.0MiB/14220msec) 00:16:35.719 slat (usec): min=452, max=2101.8k, avg=151622.84, stdev=507259.85 00:16:35.719 clat (msec): min=4211, max=14218, avg=10417.09, stdev=3438.33 00:16:35.719 lat (msec): min=4228, max=14219, avg=10568.72, stdev=3380.65 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 4212], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:16:35.719 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:16:35.719 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.719 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.719 | 99.99th=[14160] 00:16:35.719 lat (msec) : >=2000=100.00% 00:16:35.719 cpu : usr=0.00%, sys=0.26%, ctx=59, majf=0, minf=16897 00:16:35.719 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.719 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153341: Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=1, BW=1510KiB/s (1546kB/s)(21.0MiB/14239msec) 00:16:35.719 slat (msec): min=5, max=3475, avg=476.41, stdev=1005.42 00:16:35.719 clat (msec): min=4233, max=14232, avg=8998.46, stdev=4104.93 00:16:35.719 lat (msec): min=4252, max=14238, avg=9474.87, stdev=4104.81 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 4245], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 4329], 00:16:35.719 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8557], 00:16:35.719 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.719 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.719 | 99.99th=[14295] 00:16:35.719 lat (msec) : >=2000=100.00% 00:16:35.719 cpu : usr=0.00%, sys=0.10%, ctx=57, majf=0, minf=5377 00:16:35.719 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.719 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153342: Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=2, BW=2443KiB/s (2501kB/s)(29.0MiB/12157msec) 00:16:35.719 slat (msec): min=7, max=2132, avg=345.36, stdev=733.39 00:16:35.719 clat (msec): min=2141, max=12136, avg=6466.69, stdev=3131.66 00:16:35.719 lat (msec): min=4076, max=12156, avg=6812.05, stdev=3189.34 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 2140], 5.00th=[ 4077], 10.00th=[ 4077], 20.00th=[ 4144], 00:16:35.719 | 30.00th=[ 4212], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6409], 00:16:35.719 | 70.00th=[ 8557], 80.00th=[10805], 90.00th=[12147], 95.00th=[12147], 00:16:35.719 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.719 | 99.99th=[12147] 00:16:35.719 lat (msec) : >=2000=100.00% 00:16:35.719 cpu : usr=0.00%, sys=0.16%, ctx=78, majf=0, minf=7425 00:16:35.719 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.719 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153343: Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=27, BW=27.7MiB/s (29.0MB/s)(395MiB/14284msec) 00:16:35.719 slat (usec): min=55, max=2169.6k, avg=25437.96, stdev=210728.68 00:16:35.719 clat (msec): min=299, max=13167, avg=4476.98, stdev=5641.90 00:16:35.719 lat (msec): min=302, max=13169, avg=4502.41, stdev=5656.88 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 305], 5.00th=[ 317], 10.00th=[ 347], 20.00th=[ 414], 00:16:35.719 | 30.00th=[ 460], 40.00th=[ 464], 50.00th=[ 477], 60.00th=[ 523], 00:16:35.719 | 70.00th=[ 8658], 80.00th=[12953], 90.00th=[13087], 95.00th=[13087], 00:16:35.719 | 99.00th=[13087], 99.50th=[13221], 99.90th=[13221], 99.95th=[13221], 00:16:35.719 | 99.99th=[13221] 00:16:35.719 bw ( KiB/s): min= 2019, max=292864, per=4.47%, avg=91472.50, stdev=129625.85, samples=6 00:16:35.719 iops : min= 1, max= 286, avg=89.17, stdev=126.72, samples=6 00:16:35.719 lat (msec) : 500=57.97%, 750=5.32%, >=2000=36.71% 00:16:35.719 cpu : usr=0.00%, sys=0.76%, ctx=353, majf=0, minf=32769 00:16:35.719 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.1% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:35.719 issued rwts: total=395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153344: Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=2, BW=2373KiB/s (2430kB/s)(33.0MiB/14240msec) 00:16:35.719 slat (usec): min=589, max=4195.7k, avg=303064.66, stdev=987736.66 00:16:35.719 clat (msec): min=4238, max=14239, avg=11060.40, stdev=3452.11 00:16:35.719 lat (msec): min=4256, max=14239, avg=11363.46, stdev=3268.62 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 4245], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 8490], 00:16:35.719 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[14160], 00:16:35.719 | 70.00th=[14295], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:16:35.719 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.719 | 99.99th=[14295] 00:16:35.719 lat (msec) : >=2000=100.00% 00:16:35.719 cpu : usr=0.00%, sys=0.13%, ctx=47, majf=0, minf=8449 00:16:35.719 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.719 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153345: Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=2, BW=2714KiB/s (2779kB/s)(38.0MiB/14336msec) 00:16:35.719 slat (usec): min=1024, max=2162.4k, avg=265855.16, stdev=675317.69 00:16:35.719 clat (msec): min=4232, max=14332, avg=13186.68, stdev=2515.21 00:16:35.719 lat (msec): min=6383, max=14335, avg=13452.53, stdev=2030.43 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 4245], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[12953], 00:16:35.719 | 
30.00th=[14295], 40.00th=[14295], 50.00th=[14295], 60.00th=[14295], 00:16:35.719 | 70.00th=[14295], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:16:35.719 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.719 | 99.99th=[14295] 00:16:35.719 lat (msec) : >=2000=100.00% 00:16:35.719 cpu : usr=0.00%, sys=0.31%, ctx=74, majf=0, minf=9729 00:16:35.719 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.719 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.719 job1: (groupid=0, jobs=1): err= 0: pid=1153346: Thu Apr 18 13:45:38 2024 00:16:35.719 read: IOPS=2, BW=2662KiB/s (2726kB/s)(37.0MiB/14231msec) 00:16:35.719 slat (usec): min=484, max=4204.8k, avg=270472.68, stdev=921825.77 00:16:35.719 clat (msec): min=4223, max=14230, avg=11041.38, stdev=3756.40 00:16:35.719 lat (msec): min=4240, max=14230, avg=11311.86, stdev=3609.23 00:16:35.719 clat percentiles (msec): 00:16:35.719 | 1.00th=[ 4212], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 8557], 00:16:35.719 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[14026], 60.00th=[14160], 00:16:35.719 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14295], 95.00th=[14295], 00:16:35.719 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.719 | 99.99th=[14295] 00:16:35.719 lat (msec) : >=2000=100.00% 00:16:35.719 cpu : usr=0.00%, sys=0.15%, ctx=42, majf=0, minf=9473 00:16:35.719 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:16:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.719 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.720 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job1: (groupid=0, jobs=1): err= 0: pid=1153347: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=7, BW=8149KiB/s (8344kB/s)(114MiB/14326msec) 00:16:35.720 slat (usec): min=484, max=2088.3k, avg=88029.47, stdev=385169.37 00:16:35.720 clat (msec): min=4289, max=14323, avg=11064.34, stdev=2366.52 00:16:35.720 lat (msec): min=6375, max=14325, avg=11152.37, stdev=2297.93 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 6342], 5.00th=[ 6409], 10.00th=[ 6409], 20.00th=[10402], 00:16:35.720 | 30.00th=[10402], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671], 00:16:35.720 | 70.00th=[10671], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:16:35.720 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.720 | 99.99th=[14295] 00:16:35.720 lat (msec) : >=2000=100.00% 00:16:35.720 cpu : usr=0.00%, sys=0.63%, ctx=100, majf=0, minf=29185 00:16:35.720 IO depths : 1=0.9%, 2=1.8%, 4=3.5%, 8=7.0%, 16=14.0%, 32=28.1%, >=64=44.7% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.720 issued rwts: total=114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job1: (groupid=0, jobs=1): err= 0: pid=1153348: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=5, BW=5565KiB/s (5699kB/s)(66.0MiB/12144msec) 00:16:35.720 slat (usec): 
min=508, max=2102.3k, avg=151582.51, stdev=514236.32 00:16:35.720 clat (msec): min=2138, max=12141, avg=8797.53, stdev=3407.61 00:16:35.720 lat (msec): min=2143, max=12142, avg=8949.11, stdev=3328.42 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4245], 20.00th=[ 6409], 00:16:35.720 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10805], 00:16:35.720 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.720 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.720 | 99.99th=[12147] 00:16:35.720 lat (msec) : >=2000=100.00% 00:16:35.720 cpu : usr=0.00%, sys=0.31%, ctx=73, majf=0, minf=16897 00:16:35.720 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.720 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job2: (groupid=0, jobs=1): err= 0: pid=1153349: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=87, BW=87.4MiB/s (91.6MB/s)(1068MiB/12224msec) 00:16:35.720 slat (usec): min=64, max=2182.2k, avg=9366.00, stdev=110840.72 00:16:35.720 clat (msec): min=200, max=8880, avg=1402.09, stdev=2612.84 00:16:35.720 lat (msec): min=201, max=8882, avg=1411.45, stdev=2621.69 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 215], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 239], 00:16:35.720 | 30.00th=[ 249], 40.00th=[ 342], 50.00th=[ 426], 60.00th=[ 439], 00:16:35.720 | 70.00th=[ 609], 80.00th=[ 776], 90.00th=[ 8658], 95.00th=[ 8792], 00:16:35.720 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:16:35.720 | 99.99th=[ 8926] 00:16:35.720 bw ( KiB/s): min= 4096, max=573440, per=9.42%, avg=192716.80, stdev=207315.99, samples=10 00:16:35.720 iops : min= 4, max= 560, avg=188.20, stdev=202.46, samples=10 00:16:35.720 lat (msec) : 250=30.99%, 500=33.61%, 750=12.08%, 1000=9.64%, >=2000=13.67% 00:16:35.720 cpu : usr=0.06%, sys=1.22%, ctx=951, majf=0, minf=32769 00:16:35.720 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.720 issued rwts: total=1068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job2: (groupid=0, jobs=1): err= 0: pid=1153350: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=60, BW=60.4MiB/s (63.3MB/s)(860MiB/14241msec) 00:16:35.720 slat (usec): min=50, max=4204.8k, avg=11636.87, stdev=164637.77 00:16:35.720 clat (msec): min=199, max=14070, avg=1904.64, stdev=3730.84 00:16:35.720 lat (msec): min=201, max=14070, avg=1916.28, stdev=3746.31 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 201], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 218], 00:16:35.720 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 326], 60.00th=[ 401], 00:16:35.720 | 70.00th=[ 418], 80.00th=[ 426], 90.00th=[10805], 95.00th=[10939], 00:16:35.720 | 99.00th=[10939], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:16:35.720 | 99.99th=[14026] 00:16:35.720 bw ( KiB/s): min= 4096, max=561152, per=10.48%, avg=214454.86, stdev=238207.77, samples=7 00:16:35.720 iops : min= 4, max= 548, avg=209.43, stdev=232.62, 
samples=7 00:16:35.720 lat (msec) : 250=43.02%, 500=38.95%, 2000=1.98%, >=2000=16.05% 00:16:35.720 cpu : usr=0.07%, sys=0.93%, ctx=750, majf=0, minf=32769 00:16:35.720 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.720 issued rwts: total=860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job2: (groupid=0, jobs=1): err= 0: pid=1153351: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=4, BW=4144KiB/s (4243kB/s)(49.0MiB/12109msec) 00:16:35.720 slat (usec): min=532, max=2098.9k, avg=204090.58, stdev=587185.37 00:16:35.720 clat (msec): min=2108, max=11972, avg=7262.53, stdev=3081.16 00:16:35.720 lat (msec): min=2113, max=12108, avg=7466.62, stdev=3063.79 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 4212], 00:16:35.720 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8557], 00:16:35.720 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[12013], 95.00th=[12013], 00:16:35.720 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:16:35.720 | 99.99th=[12013] 00:16:35.720 lat (msec) : >=2000=100.00% 00:16:35.720 cpu : usr=0.00%, sys=0.26%, ctx=66, majf=0, minf=12545 00:16:35.720 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.720 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job2: (groupid=0, jobs=1): err= 0: pid=1153352: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=1, BW=1439KiB/s (1473kB/s)(17.0MiB/12098msec) 00:16:35.720 slat (msec): min=7, max=2104, avg=588.92, stdev=916.70 00:16:35.720 clat (msec): min=2085, max=10713, avg=5773.76, stdev=3107.09 00:16:35.720 lat (msec): min=2112, max=12096, avg=6362.68, stdev=3306.67 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2140], 00:16:35.720 | 30.00th=[ 4212], 40.00th=[ 4245], 50.00th=[ 6409], 60.00th=[ 6409], 00:16:35.720 | 70.00th=[ 8490], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[10671], 00:16:35.720 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:35.720 | 99.99th=[10671] 00:16:35.720 lat (msec) : >=2000=100.00% 00:16:35.720 cpu : usr=0.00%, sys=0.10%, ctx=52, majf=0, minf=4353 00:16:35.720 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.720 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job2: (groupid=0, jobs=1): err= 0: pid=1153353: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=0, BW=1012KiB/s (1036kB/s)(14.0MiB/14170msec) 00:16:35.720 slat (msec): min=6, max=4219, avg=715.75, stdev=1276.57 00:16:35.720 clat (msec): min=4148, max=14162, avg=10099.28, stdev=3756.13 00:16:35.720 lat (msec): min=4190, max=14169, avg=10815.02, stdev=3479.53 00:16:35.720 clat percentiles (msec): 
00:16:35.720 | 1.00th=[ 4144], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 4245], 00:16:35.720 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:16:35.720 | 70.00th=[12684], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.720 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.720 | 99.99th=[14160] 00:16:35.720 lat (msec) : >=2000=100.00% 00:16:35.720 cpu : usr=0.00%, sys=0.06%, ctx=52, majf=0, minf=3585 00:16:35.720 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job2: (groupid=0, jobs=1): err= 0: pid=1153354: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=6, BW=6222KiB/s (6372kB/s)(74.0MiB/12178msec) 00:16:35.720 slat (usec): min=553, max=2102.7k, avg=135371.10, stdev=482270.21 00:16:35.720 clat (msec): min=2159, max=12176, avg=10081.54, stdev=2862.91 00:16:35.720 lat (msec): min=4187, max=12177, avg=10216.91, stdev=2716.27 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409], 00:16:35.720 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147], 00:16:35.720 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.720 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.720 | 99.99th=[12147] 00:16:35.720 lat (msec) : >=2000=100.00% 00:16:35.720 cpu : usr=0.02%, sys=0.61%, ctx=104, majf=0, minf=18945 00:16:35.720 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9% 00:16:35.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.720 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.720 job2: (groupid=0, jobs=1): err= 0: pid=1153355: Thu Apr 18 13:45:38 2024 00:16:35.720 read: IOPS=1, BW=2022KiB/s (2070kB/s)(28.0MiB/14183msec) 00:16:35.720 slat (usec): min=548, max=2182.9k, avg=357777.88, stdev=767999.67 00:16:35.720 clat (msec): min=4164, max=14180, avg=11610.77, stdev=3480.92 00:16:35.720 lat (msec): min=4190, max=14182, avg=11968.55, stdev=3189.90 00:16:35.720 clat percentiles (msec): 00:16:35.720 | 1.00th=[ 4178], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 8490], 00:16:35.721 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14160], 60.00th=[14160], 00:16:35.721 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.721 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.721 | 99.99th=[14160] 00:16:35.721 lat (msec) : >=2000=100.00% 00:16:35.721 cpu : usr=0.00%, sys=0.13%, ctx=53, majf=0, minf=7169 00:16:35.721 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:16:35.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.721 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.721 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.721 job2: (groupid=0, jobs=1): err= 0: pid=1153356: Thu Apr 18 13:45:38 2024 
00:16:35.721 read: IOPS=3, BW=3724KiB/s (3813kB/s)(52.0MiB/14300msec) 00:16:35.721 slat (usec): min=566, max=2135.5k, avg=192810.03, stdev=582232.56 00:16:35.721 clat (msec): min=4273, max=14298, avg=13175.77, stdev=2338.35 00:16:35.721 lat (msec): min=6339, max=14299, avg=13368.58, stdev=1975.04 00:16:35.721 clat percentiles (msec): 00:16:35.721 | 1.00th=[ 4279], 5.00th=[ 8490], 10.00th=[ 8490], 20.00th=[12818], 00:16:35.721 | 30.00th=[14160], 40.00th=[14160], 50.00th=[14295], 60.00th=[14295], 00:16:35.721 | 70.00th=[14295], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:16:35.721 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.721 | 99.99th=[14295] 00:16:35.721 lat (msec) : >=2000=100.00% 00:16:35.721 cpu : usr=0.00%, sys=0.38%, ctx=93, majf=0, minf=13313 00:16:35.721 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:16:35.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.721 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.721 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.721 job2: (groupid=0, jobs=1): err= 0: pid=1153357: Thu Apr 18 13:45:38 2024 00:16:35.721 read: IOPS=2, BW=2200KiB/s (2253kB/s)(26.0MiB/12101msec) 00:16:35.721 slat (msec): min=7, max=2112, avg=385.21, stdev=769.61 00:16:35.721 clat (msec): min=2085, max=12083, avg=7148.27, stdev=3461.01 00:16:35.721 lat (msec): min=2102, max=12100, avg=7533.48, stdev=3432.21 00:16:35.721 clat percentiles (msec): 00:16:35.721 | 1.00th=[ 2089], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 4212], 00:16:35.721 | 30.00th=[ 4245], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8658], 00:16:35.721 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[12013], 95.00th=[12013], 00:16:35.721 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.721 | 99.99th=[12147] 00:16:35.721 lat (msec) : >=2000=100.00% 00:16:35.721 cpu : usr=0.00%, sys=0.12%, ctx=63, majf=0, minf=6657 00:16:35.721 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:16:35.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.721 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.721 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.721 job2: (groupid=0, jobs=1): err= 0: pid=1153358: Thu Apr 18 13:45:38 2024 00:16:35.721 read: IOPS=3, BW=3693KiB/s (3781kB/s)(44.0MiB/12202msec) 00:16:35.721 slat (usec): min=472, max=2155.1k, avg=227501.51, stdev=620572.98 00:16:35.721 clat (msec): min=2191, max=12200, avg=8807.08, stdev=3339.12 00:16:35.721 lat (msec): min=2203, max=12201, avg=9034.59, stdev=3216.65 00:16:35.721 clat percentiles (msec): 00:16:35.721 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 6477], 00:16:35.721 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10805], 00:16:35.721 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.721 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.721 | 99.99th=[12147] 00:16:35.721 lat (msec) : >=2000=100.00% 00:16:35.721 cpu : usr=0.00%, sys=0.26%, ctx=55, majf=0, minf=11265 00:16:35.721 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:16:35.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:35.721 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.721 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.721 job2: (groupid=0, jobs=1): err= 0: pid=1153359: Thu Apr 18 13:45:38 2024 00:16:35.721 read: IOPS=2, BW=2237KiB/s (2290kB/s)(31.0MiB/14193msec) 00:16:35.721 slat (usec): min=437, max=2182.9k, avg=323480.73, stdev=726884.22 00:16:35.721 clat (msec): min=4164, max=14191, avg=11838.91, stdev=3341.63 00:16:35.721 lat (msec): min=4210, max=14192, avg=12162.39, stdev=3046.27 00:16:35.721 clat percentiles (msec): 00:16:35.721 | 1.00th=[ 4178], 5.00th=[ 4212], 10.00th=[ 6275], 20.00th=[ 8557], 00:16:35.721 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14160], 00:16:35.721 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.721 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.721 | 99.99th=[14160] 00:16:35.721 lat (msec) : >=2000=100.00% 00:16:35.721 cpu : usr=0.00%, sys=0.14%, ctx=56, majf=0, minf=7937 00:16:35.721 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:16:35.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.721 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.721 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.721 job2: (groupid=0, jobs=1): err= 0: pid=1153360: Thu Apr 18 13:45:38 2024 00:16:35.721 read: IOPS=17, BW=17.5MiB/s (18.3MB/s)(249MiB/14264msec) 00:16:35.721 slat (usec): min=100, max=4191.7k, avg=40155.87, stdev=322592.59 00:16:35.721 clat (msec): min=725, max=13460, avg=7063.30, stdev=5858.32 00:16:35.721 lat (msec): min=729, max=13460, avg=7103.45, stdev=5866.38 00:16:35.721 clat percentiles (msec): 00:16:35.721 | 1.00th=[ 726], 5.00th=[ 735], 10.00th=[ 751], 20.00th=[ 785], 00:16:35.721 | 30.00th=[ 835], 40.00th=[ 885], 50.00th=[ 8490], 60.00th=[12953], 00:16:35.721 | 70.00th=[12953], 80.00th=[13221], 90.00th=[13355], 95.00th=[13355], 00:16:35.721 | 99.00th=[13489], 99.50th=[13489], 99.90th=[13489], 99.95th=[13489], 00:16:35.721 | 99.99th=[13489] 00:16:35.721 bw ( KiB/s): min= 4096, max=159744, per=2.44%, avg=49964.60, stdev=64728.86, samples=5 00:16:35.721 iops : min= 4, max= 156, avg=48.60, stdev=63.34, samples=5 00:16:35.721 lat (msec) : 750=10.44%, 1000=32.13%, >=2000=57.43% 00:16:35.721 cpu : usr=0.01%, sys=0.82%, ctx=211, majf=0, minf=32769 00:16:35.721 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.9%, >=64=74.7% 00:16:35.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.721 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:35.721 issued rwts: total=249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.721 job2: (groupid=0, jobs=1): err= 0: pid=1153361: Thu Apr 18 13:45:38 2024 00:16:35.721 read: IOPS=6, BW=6238KiB/s (6388kB/s)(74.0MiB/12147msec) 00:16:35.721 slat (usec): min=477, max=2104.7k, avg=135155.44, stdev=479993.13 00:16:35.721 clat (msec): min=2144, max=12145, avg=8759.85, stdev=3356.99 00:16:35.721 lat (msec): min=2153, max=12146, avg=8895.01, stdev=3287.63 00:16:35.721 clat percentiles (msec): 00:16:35.721 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 6409], 00:16:35.721 | 30.00th=[ 6477], 40.00th=[ 
8557], 50.00th=[ 8658], 60.00th=[10805], 00:16:35.721 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.721 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.721 | 99.99th=[12147] 00:16:35.721 lat (msec) : >=2000=100.00% 00:16:35.721 cpu : usr=0.00%, sys=0.33%, ctx=83, majf=0, minf=18945 00:16:35.721 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9% 00:16:35.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.721 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.721 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.721 job3: (groupid=0, jobs=1): err= 0: pid=1153362: Thu Apr 18 13:45:38 2024 00:16:35.721 read: IOPS=14, BW=14.9MiB/s (15.6MB/s)(213MiB/14292msec) 00:16:35.721 slat (usec): min=53, max=2108.3k, avg=47081.59, stdev=285384.56 00:16:35.721 clat (msec): min=155, max=10721, avg=7125.22, stdev=4041.73 00:16:35.721 lat (msec): min=156, max=12788, avg=7172.30, stdev=4046.20 00:16:35.721 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 157], 5.00th=[ 182], 10.00th=[ 305], 20.00th=[ 3708], 00:16:35.722 | 30.00th=[ 3775], 40.00th=[ 4279], 50.00th=[10402], 60.00th=[10402], 00:16:35.722 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:35.722 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10671], 99.95th=[10671], 00:16:35.722 | 99.99th=[10671] 00:16:35.722 bw ( KiB/s): min= 1932, max=163840, per=2.15%, avg=44003.00, stdev=79909.84, samples=4 00:16:35.722 iops : min= 1, max= 160, avg=42.75, stdev=78.19, samples=4 00:16:35.722 lat (msec) : 250=6.10%, 500=7.04%, >=2000=86.85% 00:16:35.722 cpu : usr=0.00%, sys=0.72%, ctx=239, majf=0, minf=32769 00:16:35.722 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.5%, 32=15.0%, >=64=70.4% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:16:35.722 issued rwts: total=213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 job3: (groupid=0, jobs=1): err= 0: pid=1153363: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=157, BW=157MiB/s (165MB/s)(1928MiB/12271msec) 00:16:35.722 slat (usec): min=51, max=2015.2k, avg=5182.04, stdev=70619.67 00:16:35.722 clat (msec): min=130, max=4569, avg=638.40, stdev=1189.58 00:16:35.722 lat (msec): min=131, max=4569, avg=643.58, stdev=1195.11 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 132], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 134], 00:16:35.722 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 138], 00:16:35.722 | 70.00th=[ 558], 80.00th=[ 651], 90.00th=[ 835], 95.00th=[ 4463], 00:16:35.722 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:16:35.722 | 99.99th=[ 4597] 00:16:35.722 bw ( KiB/s): min= 2048, max=968704, per=20.03%, avg=409827.56, stdev=360605.70, samples=9 00:16:35.722 iops : min= 2, max= 946, avg=400.22, stdev=352.15, samples=9 00:16:35.722 lat (msec) : 250=65.46%, 500=3.16%, 750=18.57%, 1000=3.84%, >=2000=8.97% 00:16:35.722 cpu : usr=0.09%, sys=2.15%, ctx=1849, majf=0, minf=32769 00:16:35.722 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.722 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 job3: (groupid=0, jobs=1): err= 0: pid=1153364: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=4, BW=4817KiB/s (4932kB/s)(57.0MiB/12118msec) 00:16:35.722 slat (usec): min=546, max=2035.1k, avg=176337.10, stdev=537305.00 00:16:35.722 clat (msec): min=2066, max=12110, avg=6615.38, stdev=3079.81 00:16:35.722 lat (msec): min=2141, max=12117, avg=6791.72, stdev=3102.37 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 2072], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4245], 00:16:35.722 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 6477], 00:16:35.722 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10805], 95.00th=[12147], 00:16:35.722 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.722 | 99.99th=[12147] 00:16:35.722 lat (msec) : >=2000=100.00% 00:16:35.722 cpu : usr=0.01%, sys=0.28%, ctx=84, majf=0, minf=14593 00:16:35.722 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.722 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 job3: (groupid=0, jobs=1): err= 0: pid=1153365: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=54, BW=54.8MiB/s (57.4MB/s)(556MiB/10154msec) 00:16:35.722 slat (usec): min=77, max=2112.2k, avg=17979.12, stdev=134325.27 00:16:35.722 clat (msec): min=151, max=4881, avg=1681.06, stdev=1320.38 00:16:35.722 lat (msec): min=155, max=4889, avg=1699.04, stdev=1328.42 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 165], 5.00th=[ 275], 10.00th=[ 422], 20.00th=[ 785], 00:16:35.722 | 30.00th=[ 877], 40.00th=[ 894], 50.00th=[ 927], 60.00th=[ 1284], 00:16:35.722 | 70.00th=[ 3104], 80.00th=[ 3171], 90.00th=[ 3239], 95.00th=[ 4732], 00:16:35.722 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:16:35.722 | 99.99th=[ 4866] 00:16:35.722 bw ( KiB/s): min=32768, max=178176, per=5.37%, avg=109824.00, stdev=50699.45, samples=8 00:16:35.722 iops : min= 32, max= 174, avg=107.25, stdev=49.51, samples=8 00:16:35.722 lat (msec) : 250=3.96%, 500=8.27%, 750=6.29%, 1000=37.23%, 2000=13.31% 00:16:35.722 lat (msec) : >=2000=30.94% 00:16:35.722 cpu : usr=0.06%, sys=1.23%, ctx=729, majf=0, minf=32769 00:16:35.722 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.7% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:35.722 issued rwts: total=556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 job3: (groupid=0, jobs=1): err= 0: pid=1153366: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=1, BW=1297KiB/s (1328kB/s)(18.0MiB/14209msec) 00:16:35.722 slat (usec): min=528, max=2104.6k, avg=555632.67, stdev=899048.16 00:16:35.722 clat (msec): min=4207, max=14208, avg=9264.86, stdev=3955.32 00:16:35.722 lat (msec): min=4241, max=14208, avg=9820.50, stdev=3905.22 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 4212], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:16:35.722 | 30.00th=[ 6342], 40.00th=[ 
8490], 50.00th=[ 8490], 60.00th=[10671], 00:16:35.722 | 70.00th=[12818], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.722 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.722 | 99.99th=[14160] 00:16:35.722 lat (msec) : >=2000=100.00% 00:16:35.722 cpu : usr=0.00%, sys=0.08%, ctx=53, majf=0, minf=4609 00:16:35.722 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.722 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 job3: (groupid=0, jobs=1): err= 0: pid=1153367: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=27, BW=27.3MiB/s (28.6MB/s)(332MiB/12173msec) 00:16:35.722 slat (usec): min=67, max=2054.4k, avg=30476.89, stdev=222797.56 00:16:35.722 clat (msec): min=399, max=12138, avg=4531.58, stdev=4602.82 00:16:35.722 lat (msec): min=401, max=12143, avg=4562.06, stdev=4614.58 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 405], 5.00th=[ 405], 10.00th=[ 409], 20.00th=[ 414], 00:16:35.722 | 30.00th=[ 435], 40.00th=[ 477], 50.00th=[ 2165], 60.00th=[ 4933], 00:16:35.722 | 70.00th=[ 8658], 80.00th=[11073], 90.00th=[11208], 95.00th=[11208], 00:16:35.722 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.722 | 99.99th=[12147] 00:16:35.722 bw ( KiB/s): min= 2027, max=223232, per=2.93%, avg=59974.14, stdev=78592.73, samples=7 00:16:35.722 iops : min= 1, max= 218, avg=58.43, stdev=76.87, samples=7 00:16:35.722 lat (msec) : 500=41.27%, 750=5.72%, 1000=0.60%, >=2000=52.41% 00:16:35.722 cpu : usr=0.00%, sys=0.83%, ctx=287, majf=0, minf=32769 00:16:35.722 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.0% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:35.722 issued rwts: total=332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 job3: (groupid=0, jobs=1): err= 0: pid=1153368: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=6, BW=6386KiB/s (6539kB/s)(89.0MiB/14271msec) 00:16:35.722 slat (usec): min=532, max=4172.8k, avg=112355.76, stdev=545322.17 00:16:35.722 clat (msec): min=4270, max=14267, avg=11844.40, stdev=2069.62 00:16:35.722 lat (msec): min=4271, max=14270, avg=11956.76, stdev=1919.78 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 4279], 5.00th=[10537], 10.00th=[10537], 20.00th=[10537], 00:16:35.722 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10537], 60.00th=[12818], 00:16:35.722 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14295], 00:16:35.722 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:16:35.722 | 99.99th=[14295] 00:16:35.722 lat (msec) : >=2000=100.00% 00:16:35.722 cpu : usr=0.00%, sys=0.44%, ctx=130, majf=0, minf=22785 00:16:35.722 IO depths : 1=1.1%, 2=2.2%, 4=4.5%, 8=9.0%, 16=18.0%, 32=36.0%, >=64=29.2% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.722 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 
job3: (groupid=0, jobs=1): err= 0: pid=1153369: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=1, BW=1299KiB/s (1330kB/s)(18.0MiB/14186msec) 00:16:35.722 slat (msec): min=7, max=2087, avg=555.98, stdev=880.18 00:16:35.722 clat (msec): min=4177, max=14070, avg=8652.23, stdev=3574.67 00:16:35.722 lat (msec): min=4194, max=14185, avg=9208.21, stdev=3615.82 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 4178], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:16:35.722 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[10671], 00:16:35.722 | 70.00th=[10671], 80.00th=[12818], 90.00th=[14026], 95.00th=[14026], 00:16:35.722 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:16:35.722 | 99.99th=[14026] 00:16:35.722 lat (msec) : >=2000=100.00% 00:16:35.722 cpu : usr=0.00%, sys=0.07%, ctx=53, majf=0, minf=4609 00:16:35.722 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:16:35.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.722 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:35.722 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.722 job3: (groupid=0, jobs=1): err= 0: pid=1153370: Thu Apr 18 13:45:38 2024 00:16:35.722 read: IOPS=66, BW=66.0MiB/s (69.3MB/s)(941MiB/14248msec) 00:16:35.722 slat (usec): min=49, max=2093.7k, avg=10744.85, stdev=129108.21 00:16:35.722 clat (msec): min=131, max=14085, avg=1725.51, stdev=3781.02 00:16:35.722 lat (msec): min=132, max=14149, avg=1736.26, stdev=3798.88 00:16:35.722 clat percentiles (msec): 00:16:35.722 | 1.00th=[ 132], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 133], 00:16:35.722 | 30.00th=[ 134], 40.00th=[ 134], 50.00th=[ 136], 60.00th=[ 136], 00:16:35.723 | 70.00th=[ 136], 80.00th=[ 142], 90.00th=[11745], 95.00th=[11879], 00:16:35.723 | 99.00th=[11879], 99.50th=[11879], 99.90th=[14026], 99.95th=[14026], 00:16:35.723 | 99.99th=[14026] 00:16:35.723 bw ( KiB/s): min= 1946, max=919552, per=11.64%, avg=238138.57, stdev=393474.51, samples=7 00:16:35.723 iops : min= 1, max= 898, avg=232.43, stdev=384.34, samples=7 00:16:35.723 lat (msec) : 250=82.78%, 2000=1.17%, >=2000=16.05% 00:16:35.723 cpu : usr=0.04%, sys=0.91%, ctx=876, majf=0, minf=32769 00:16:35.723 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.723 issued rwts: total=941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job3: (groupid=0, jobs=1): err= 0: pid=1153371: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=15, BW=15.5MiB/s (16.2MB/s)(221MiB/14288msec) 00:16:35.723 slat (usec): min=76, max=2163.2k, avg=45341.42, stdev=279523.37 00:16:35.723 clat (msec): min=740, max=13522, avg=7894.60, stdev=5903.76 00:16:35.723 lat (msec): min=743, max=13530, avg=7939.95, stdev=5906.83 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 743], 5.00th=[ 751], 10.00th=[ 760], 20.00th=[ 776], 00:16:35.723 | 30.00th=[ 818], 40.00th=[ 5067], 50.00th=[12818], 60.00th=[12953], 00:16:35.723 | 70.00th=[13087], 80.00th=[13221], 90.00th=[13355], 95.00th=[13489], 00:16:35.723 | 99.00th=[13489], 99.50th=[13489], 99.90th=[13489], 99.95th=[13489], 00:16:35.723 | 99.99th=[13489] 00:16:35.723 bw ( KiB/s): min= 
1961, max=161792, per=1.57%, avg=32070.83, stdev=63656.77, samples=6 00:16:35.723 iops : min= 1, max= 158, avg=31.17, stdev=62.25, samples=6 00:16:35.723 lat (msec) : 750=4.07%, 1000=34.39%, >=2000=61.54% 00:16:35.723 cpu : usr=0.01%, sys=0.73%, ctx=232, majf=0, minf=32769 00:16:35.723 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.5%, >=64=71.5% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:16:35.723 issued rwts: total=221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job3: (groupid=0, jobs=1): err= 0: pid=1153372: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=21, BW=21.4MiB/s (22.4MB/s)(261MiB/12207msec) 00:16:35.723 slat (usec): min=52, max=2046.0k, avg=38343.79, stdev=226148.54 00:16:35.723 clat (msec): min=1231, max=8684, avg=4569.84, stdev=2370.66 00:16:35.723 lat (msec): min=1239, max=10711, avg=4608.19, stdev=2383.68 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 1234], 5.00th=[ 1250], 10.00th=[ 1250], 20.00th=[ 1334], 00:16:35.723 | 30.00th=[ 3272], 40.00th=[ 4329], 50.00th=[ 4732], 60.00th=[ 6477], 00:16:35.723 | 70.00th=[ 6745], 80.00th=[ 7013], 90.00th=[ 7282], 95.00th=[ 7416], 00:16:35.723 | 99.00th=[ 7550], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:16:35.723 | 99.99th=[ 8658] 00:16:35.723 bw ( KiB/s): min= 8192, max=102400, per=2.68%, avg=54872.40, stdev=38580.09, samples=5 00:16:35.723 iops : min= 8, max= 100, avg=53.40, stdev=37.80, samples=5 00:16:35.723 lat (msec) : 2000=26.44%, >=2000=73.56% 00:16:35.723 cpu : usr=0.02%, sys=0.70%, ctx=449, majf=0, minf=32769 00:16:35.723 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.1%, 32=12.3%, >=64=75.9% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:35.723 issued rwts: total=261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job3: (groupid=0, jobs=1): err= 0: pid=1153373: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=136, BW=137MiB/s (143MB/s)(1954MiB/14296msec) 00:16:35.723 slat (usec): min=53, max=2144.2k, avg=5126.79, stdev=80664.79 00:16:35.723 clat (msec): min=124, max=10590, avg=912.34, stdev=1856.40 00:16:35.723 lat (msec): min=126, max=10666, avg=917.47, stdev=1861.39 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 130], 5.00th=[ 132], 10.00th=[ 132], 20.00th=[ 133], 00:16:35.723 | 30.00th=[ 134], 40.00th=[ 134], 50.00th=[ 136], 60.00th=[ 138], 00:16:35.723 | 70.00th=[ 334], 80.00th=[ 464], 90.00th=[ 4799], 95.00th=[ 6477], 00:16:35.723 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[10537], 00:16:35.723 | 99.99th=[10537] 00:16:35.723 bw ( KiB/s): min= 1932, max=972800, per=16.63%, avg=340143.64, stdev=378894.18, samples=11 00:16:35.723 iops : min= 1, max= 950, avg=332.09, stdev=370.09, samples=11 00:16:35.723 lat (msec) : 250=66.63%, 500=17.20%, 750=2.92%, >=2000=13.25% 00:16:35.723 cpu : usr=0.06%, sys=1.88%, ctx=1872, majf=0, minf=32769 00:16:35.723 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.723 issued rwts: total=1954,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job3: (groupid=0, jobs=1): err= 0: pid=1153374: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=3, BW=4008KiB/s (4104kB/s)(48.0MiB/12265msec) 00:16:35.723 slat (usec): min=532, max=2086.5k, avg=208478.78, stdev=597472.05 00:16:35.723 clat (msec): min=2257, max=12263, avg=11033.61, stdev=2514.43 00:16:35.723 lat (msec): min=4343, max=12264, avg=11242.09, stdev=2161.33 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 2265], 5.00th=[ 4396], 10.00th=[ 6477], 20.00th=[10671], 00:16:35.723 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12281], 00:16:35.723 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:35.723 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:35.723 | 99.99th=[12281] 00:16:35.723 lat (msec) : >=2000=100.00% 00:16:35.723 cpu : usr=0.00%, sys=0.33%, ctx=76, majf=0, minf=12289 00:16:35.723 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.723 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job4: (groupid=0, jobs=1): err= 0: pid=1153375: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=13, BW=14.0MiB/s (14.7MB/s)(198MiB/14167msec) 00:16:35.723 slat (usec): min=539, max=2048.5k, avg=50542.87, stdev=270312.02 00:16:35.723 clat (msec): min=849, max=12864, avg=7930.66, stdev=4529.66 00:16:35.723 lat (msec): min=852, max=14166, avg=7981.20, stdev=4536.77 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 852], 5.00th=[ 860], 10.00th=[ 894], 20.00th=[ 1838], 00:16:35.723 | 30.00th=[ 4144], 40.00th=[ 8087], 50.00th=[10671], 60.00th=[11745], 00:16:35.723 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12147], 95.00th=[12281], 00:16:35.723 | 99.00th=[12281], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:16:35.723 | 99.99th=[12818] 00:16:35.723 bw ( KiB/s): min=12263, max=65536, per=1.18%, avg=24230.50, stdev=20824.76, samples=6 00:16:35.723 iops : min= 11, max= 64, avg=23.50, stdev=20.45, samples=6 00:16:35.723 lat (msec) : 1000=16.16%, 2000=7.58%, >=2000=76.26% 00:16:35.723 cpu : usr=0.01%, sys=0.56%, ctx=369, majf=0, minf=32769 00:16:35.723 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.1%, 32=16.2%, >=64=68.2% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:16:35.723 issued rwts: total=198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job4: (groupid=0, jobs=1): err= 0: pid=1153376: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=7, BW=7306KiB/s (7481kB/s)(87.0MiB/12194msec) 00:16:35.723 slat (usec): min=488, max=2020.8k, avg=115077.02, stdev=436616.63 00:16:35.723 clat (msec): min=2181, max=12190, avg=8794.57, stdev=3130.15 00:16:35.723 lat (msec): min=2197, max=12193, avg=8909.65, stdev=3067.60 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 2198], 5.00th=[ 2232], 10.00th=[ 4279], 20.00th=[ 6409], 00:16:35.723 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10805], 00:16:35.723 | 70.00th=[10805], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:16:35.723 | 
99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.723 | 99.99th=[12147] 00:16:35.723 lat (msec) : >=2000=100.00% 00:16:35.723 cpu : usr=0.00%, sys=0.53%, ctx=91, majf=0, minf=22273 00:16:35.723 IO depths : 1=1.1%, 2=2.3%, 4=4.6%, 8=9.2%, 16=18.4%, 32=36.8%, >=64=27.6% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.723 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job4: (groupid=0, jobs=1): err= 0: pid=1153377: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=7, BW=7570KiB/s (7752kB/s)(90.0MiB/12174msec) 00:16:35.723 slat (usec): min=524, max=2031.8k, avg=111140.51, stdev=430015.11 00:16:35.723 clat (msec): min=2169, max=12170, avg=8822.81, stdev=3500.58 00:16:35.723 lat (msec): min=2182, max=12172, avg=8933.95, stdev=3445.33 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 2165], 5.00th=[ 2198], 10.00th=[ 4245], 20.00th=[ 4329], 00:16:35.723 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10805], 00:16:35.723 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.723 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.723 | 99.99th=[12147] 00:16:35.723 lat (msec) : >=2000=100.00% 00:16:35.723 cpu : usr=0.00%, sys=0.45%, ctx=97, majf=0, minf=23041 00:16:35.723 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.9%, 16=17.8%, 32=35.6%, >=64=30.0% 00:16:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.723 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.723 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.723 job4: (groupid=0, jobs=1): err= 0: pid=1153378: Thu Apr 18 13:45:38 2024 00:16:35.723 read: IOPS=18, BW=18.0MiB/s (18.9MB/s)(219MiB/12163msec) 00:16:35.723 slat (usec): min=416, max=2010.9k, avg=45691.73, stdev=245565.90 00:16:35.723 clat (msec): min=1749, max=8981, avg=3877.56, stdev=2050.94 00:16:35.723 lat (msec): min=1757, max=8987, avg=3923.25, stdev=2073.50 00:16:35.723 clat percentiles (msec): 00:16:35.723 | 1.00th=[ 1754], 5.00th=[ 1804], 10.00th=[ 1921], 20.00th=[ 2299], 00:16:35.723 | 30.00th=[ 2601], 40.00th=[ 2903], 50.00th=[ 3272], 60.00th=[ 3574], 00:16:35.723 | 70.00th=[ 3842], 80.00th=[ 5738], 90.00th=[ 7684], 95.00th=[ 8926], 00:16:35.723 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:16:35.723 | 99.99th=[ 8926] 00:16:35.724 bw ( KiB/s): min=32768, max=86016, per=3.07%, avg=62805.33, stdev=27272.51, samples=3 00:16:35.724 iops : min= 32, max= 84, avg=61.33, stdev=26.63, samples=3 00:16:35.724 lat (msec) : 2000=12.33%, >=2000=87.67% 00:16:35.724 cpu : usr=0.01%, sys=0.65%, ctx=371, majf=0, minf=32769 00:16:35.724 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.3%, 32=14.6%, >=64=71.2% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:16:35.724 issued rwts: total=219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153379: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=6, BW=6484KiB/s (6640kB/s)(77.0MiB/12160msec) 00:16:35.724 slat 
(usec): min=493, max=2030.6k, avg=130013.65, stdev=465271.57 00:16:35.724 clat (msec): min=2148, max=12155, avg=7870.73, stdev=3734.54 00:16:35.724 lat (msec): min=2165, max=12159, avg=8000.75, stdev=3706.87 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 2165], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4329], 00:16:35.724 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10805], 00:16:35.724 | 70.00th=[10805], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.724 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.724 | 99.99th=[12147] 00:16:35.724 lat (msec) : >=2000=100.00% 00:16:35.724 cpu : usr=0.00%, sys=0.37%, ctx=88, majf=0, minf=19713 00:16:35.724 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.724 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153380: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=68, BW=68.7MiB/s (72.0MB/s)(697MiB/10147msec) 00:16:35.724 slat (usec): min=66, max=2107.8k, avg=14357.78, stdev=118867.21 00:16:35.724 clat (msec): min=133, max=4435, avg=1275.92, stdev=1121.96 00:16:35.724 lat (msec): min=150, max=4448, avg=1290.28, stdev=1129.88 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 169], 5.00th=[ 309], 10.00th=[ 518], 20.00th=[ 592], 00:16:35.724 | 30.00th=[ 625], 40.00th=[ 701], 50.00th=[ 793], 60.00th=[ 869], 00:16:35.724 | 70.00th=[ 902], 80.00th=[ 2869], 90.00th=[ 3171], 95.00th=[ 3239], 00:16:35.724 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:16:35.724 | 99.99th=[ 4463] 00:16:35.724 bw ( KiB/s): min=24576, max=219136, per=6.33%, avg=129467.67, stdev=66864.15, samples=9 00:16:35.724 iops : min= 24, max= 214, avg=126.33, stdev=65.43, samples=9 00:16:35.724 lat (msec) : 250=3.30%, 500=6.46%, 750=34.72%, 1000=30.27%, 2000=2.73% 00:16:35.724 lat (msec) : >=2000=22.53% 00:16:35.724 cpu : usr=0.05%, sys=1.68%, ctx=705, majf=0, minf=32769 00:16:35.724 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:35.724 issued rwts: total=697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153381: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=47, BW=47.5MiB/s (49.8MB/s)(575MiB/12114msec) 00:16:35.724 slat (usec): min=65, max=2089.1k, avg=20925.57, stdev=154384.86 00:16:35.724 clat (msec): min=76, max=4274, avg=1864.30, stdev=1233.17 00:16:35.724 lat (msec): min=575, max=4287, avg=1885.22, stdev=1238.32 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 575], 5.00th=[ 584], 10.00th=[ 592], 20.00th=[ 617], 00:16:35.724 | 30.00th=[ 684], 40.00th=[ 785], 50.00th=[ 1703], 60.00th=[ 2500], 00:16:35.724 | 70.00th=[ 2937], 80.00th=[ 3239], 90.00th=[ 3540], 95.00th=[ 3708], 00:16:35.724 | 99.00th=[ 3809], 99.50th=[ 4212], 99.90th=[ 4279], 99.95th=[ 4279], 00:16:35.724 | 99.99th=[ 4279] 00:16:35.724 bw ( KiB/s): min=36864, max=221184, per=4.97%, avg=101717.33, stdev=67304.15, samples=9 00:16:35.724 iops : min= 36, 
max= 216, avg=99.33, stdev=65.73, samples=9 00:16:35.724 lat (msec) : 100=0.17%, 750=36.52%, 1000=10.96%, 2000=6.61%, >=2000=45.74% 00:16:35.724 cpu : usr=0.05%, sys=0.84%, ctx=707, majf=0, minf=32769 00:16:35.724 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:35.724 issued rwts: total=575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153382: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=5, BW=5123KiB/s (5246kB/s)(71.0MiB/14191msec) 00:16:35.724 slat (usec): min=490, max=2058.1k, avg=141043.26, stdev=486854.55 00:16:35.724 clat (msec): min=4176, max=14188, avg=9609.66, stdev=3609.35 00:16:35.724 lat (msec): min=4194, max=14190, avg=9750.70, stdev=3589.60 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 4178], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6342], 00:16:35.724 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10537], 60.00th=[10671], 00:16:35.724 | 70.00th=[12818], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:16:35.724 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:35.724 | 99.99th=[14160] 00:16:35.724 lat (msec) : >=2000=100.00% 00:16:35.724 cpu : usr=0.00%, sys=0.30%, ctx=80, majf=0, minf=18177 00:16:35.724 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.724 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153383: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=17, BW=17.8MiB/s (18.7MB/s)(253MiB/14220msec) 00:16:35.724 slat (usec): min=63, max=2120.5k, avg=39563.93, stdev=255834.97 00:16:35.724 clat (msec): min=728, max=13332, avg=6906.58, stdev=5342.51 00:16:35.724 lat (msec): min=729, max=13334, avg=6946.15, stdev=5352.09 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 726], 5.00th=[ 743], 10.00th=[ 768], 20.00th=[ 793], 00:16:35.724 | 30.00th=[ 852], 40.00th=[ 4245], 50.00th=[ 7013], 60.00th=[ 9194], 00:16:35.724 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13221], 95.00th=[13221], 00:16:35.724 | 99.00th=[13355], 99.50th=[13355], 99.90th=[13355], 99.95th=[13355], 00:16:35.724 | 99.99th=[13355] 00:16:35.724 bw ( KiB/s): min=16384, max=135168, per=2.10%, avg=43008.00, stdev=46557.67, samples=6 00:16:35.724 iops : min= 16, max= 132, avg=42.00, stdev=45.47, samples=6 00:16:35.724 lat (msec) : 750=8.70%, 1000=26.48%, >=2000=64.82% 00:16:35.724 cpu : usr=0.01%, sys=0.63%, ctx=222, majf=0, minf=32769 00:16:35.724 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.6%, >=64=75.1% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:35.724 issued rwts: total=253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153384: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=17, BW=17.1MiB/s (18.0MB/s)(243MiB/14171msec) 00:16:35.724 slat 
(usec): min=63, max=2082.7k, avg=41197.14, stdev=259628.19 00:16:35.724 clat (msec): min=734, max=13385, avg=7171.51, stdev=5479.34 00:16:35.724 lat (msec): min=736, max=13385, avg=7212.71, stdev=5487.15 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 735], 5.00th=[ 743], 10.00th=[ 760], 20.00th=[ 793], 00:16:35.724 | 30.00th=[ 844], 40.00th=[ 2903], 50.00th=[ 7148], 60.00th=[10805], 00:16:35.724 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13221], 95.00th=[13355], 00:16:35.724 | 99.00th=[13355], 99.50th=[13355], 99.90th=[13355], 99.95th=[13355], 00:16:35.724 | 99.99th=[13355] 00:16:35.724 bw ( KiB/s): min= 1542, max=108544, per=1.45%, avg=29630.63, stdev=36871.30, samples=8 00:16:35.724 iops : min= 1, max= 106, avg=28.75, stdev=36.15, samples=8 00:16:35.724 lat (msec) : 750=6.58%, 1000=27.16%, >=2000=66.26% 00:16:35.724 cpu : usr=0.01%, sys=0.55%, ctx=202, majf=0, minf=32769 00:16:35.724 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.6%, 32=13.2%, >=64=74.1% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:16:35.724 issued rwts: total=243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153385: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=5, BW=5479KiB/s (5610kB/s)(65.0MiB/12149msec) 00:16:35.724 slat (usec): min=501, max=2195.5k, avg=153878.17, stdev=513036.78 00:16:35.724 clat (msec): min=2145, max=12147, avg=7789.30, stdev=4103.15 00:16:35.724 lat (msec): min=2157, max=12148, avg=7943.17, stdev=4075.67 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 2232], 00:16:35.724 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 8658], 60.00th=[10805], 00:16:35.724 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:35.724 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:35.724 | 99.99th=[12147] 00:16:35.724 lat (msec) : >=2000=100.00% 00:16:35.724 cpu : usr=0.00%, sys=0.30%, ctx=86, majf=0, minf=16641 00:16:35.724 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:16:35.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.724 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.724 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.724 job4: (groupid=0, jobs=1): err= 0: pid=1153386: Thu Apr 18 13:45:38 2024 00:16:35.724 read: IOPS=130, BW=130MiB/s (137MB/s)(1848MiB/14164msec) 00:16:35.724 slat (usec): min=55, max=2028.9k, avg=5410.19, stdev=70541.96 00:16:35.724 clat (msec): min=121, max=10727, avg=861.73, stdev=1991.04 00:16:35.724 lat (msec): min=122, max=11644, avg=867.14, stdev=2001.02 00:16:35.724 clat percentiles (msec): 00:16:35.724 | 1.00th=[ 136], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 142], 00:16:35.724 | 30.00th=[ 144], 40.00th=[ 144], 50.00th=[ 144], 60.00th=[ 144], 00:16:35.724 | 70.00th=[ 146], 80.00th=[ 430], 90.00th=[ 1955], 95.00th=[ 6275], 00:16:35.725 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[10671], 99.95th=[10671], 00:16:35.725 | 99.99th=[10671] 00:16:35.725 bw ( KiB/s): min= 2048, max=897024, per=15.66%, avg=320415.91, stdev=389464.51, samples=11 00:16:35.725 iops : min= 2, max= 876, avg=312.82, stdev=380.41, samples=11 00:16:35.725 
lat (msec) : 250=78.90%, 500=1.62%, 750=1.08%, 1000=2.06%, 2000=6.60% 00:16:35.725 lat (msec) : >=2000=9.74% 00:16:35.725 cpu : usr=0.05%, sys=1.59%, ctx=1860, majf=0, minf=32769 00:16:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.725 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.725 job4: (groupid=0, jobs=1): err= 0: pid=1153387: Thu Apr 18 13:45:38 2024 00:16:35.725 read: IOPS=52, BW=52.4MiB/s (54.9MB/s)(747MiB/14266msec) 00:16:35.725 slat (usec): min=50, max=2027.7k, avg=13398.04, stdev=135407.44 00:16:35.725 clat (msec): min=287, max=8178, avg=1420.79, stdev=2087.68 00:16:35.725 lat (msec): min=289, max=8180, avg=1434.19, stdev=2102.51 00:16:35.725 clat percentiles (msec): 00:16:35.725 | 1.00th=[ 288], 5.00th=[ 292], 10.00th=[ 292], 20.00th=[ 292], 00:16:35.725 | 30.00th=[ 300], 40.00th=[ 305], 50.00th=[ 309], 60.00th=[ 321], 00:16:35.725 | 70.00th=[ 330], 80.00th=[ 4665], 90.00th=[ 4799], 95.00th=[ 4866], 00:16:35.725 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:16:35.725 | 99.99th=[ 8154] 00:16:35.725 bw ( KiB/s): min=112640, max=446464, per=15.52%, avg=317440.00, stdev=156266.56, samples=4 00:16:35.725 iops : min= 110, max= 436, avg=310.00, stdev=152.60, samples=4 00:16:35.725 lat (msec) : 500=75.64%, 750=0.27%, >=2000=24.10% 00:16:35.725 cpu : usr=0.04%, sys=1.00%, ctx=712, majf=0, minf=32769 00:16:35.725 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:35.725 issued rwts: total=747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.725 job5: (groupid=0, jobs=1): err= 0: pid=1153388: Thu Apr 18 13:45:38 2024 00:16:35.725 read: IOPS=96, BW=96.4MiB/s (101MB/s)(976MiB/10125msec) 00:16:35.725 slat (usec): min=64, max=2053.7k, avg=10253.48, stdev=121476.53 00:16:35.725 clat (msec): min=111, max=8750, avg=1278.32, stdev=2354.11 00:16:35.725 lat (msec): min=156, max=8989, avg=1288.57, stdev=2364.25 00:16:35.725 clat percentiles (msec): 00:16:35.725 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 165], 00:16:35.725 | 30.00th=[ 167], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 284], 00:16:35.725 | 70.00th=[ 502], 80.00th=[ 584], 90.00th=[ 6409], 95.00th=[ 8020], 00:16:35.725 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8792], 99.95th=[ 8792], 00:16:35.725 | 99.99th=[ 8792] 00:16:35.725 bw ( KiB/s): min= 2048, max=741376, per=8.50%, avg=173875.20, stdev=253580.02, samples=10 00:16:35.725 iops : min= 2, max= 724, avg=169.80, stdev=247.64, samples=10 00:16:35.725 lat (msec) : 250=39.65%, 500=30.12%, 750=11.27%, 2000=2.97%, >=2000=15.98% 00:16:35.725 cpu : usr=0.05%, sys=1.42%, ctx=1002, majf=0, minf=32769 00:16:35.725 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.725 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:16:35.725 job5: (groupid=0, jobs=1): err= 0: pid=1153389: Thu Apr 18 13:45:38 2024 00:16:35.725 read: IOPS=21, BW=21.3MiB/s (22.3MB/s)(216MiB/10142msec) 00:16:35.725 slat (usec): min=72, max=2150.6k, avg=46307.94, stdev=279254.94 00:16:35.725 clat (msec): min=137, max=8095, avg=3393.83, stdev=2533.81 00:16:35.725 lat (msec): min=141, max=8097, avg=3440.14, stdev=2537.48 00:16:35.725 clat percentiles (msec): 00:16:35.725 | 1.00th=[ 146], 5.00th=[ 426], 10.00th=[ 1888], 20.00th=[ 1955], 00:16:35.725 | 30.00th=[ 2005], 40.00th=[ 2039], 50.00th=[ 2089], 60.00th=[ 2140], 00:16:35.725 | 70.00th=[ 2534], 80.00th=[ 6745], 90.00th=[ 8087], 95.00th=[ 8087], 00:16:35.725 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:16:35.725 | 99.99th=[ 8087] 00:16:35.725 bw ( KiB/s): min=14336, max=167936, per=4.46%, avg=91136.00, stdev=108611.60, samples=2 00:16:35.725 iops : min= 14, max= 164, avg=89.00, stdev=106.07, samples=2 00:16:35.725 lat (msec) : 250=3.24%, 500=4.17%, 2000=22.22%, >=2000=70.37% 00:16:35.725 cpu : usr=0.01%, sys=0.92%, ctx=209, majf=0, minf=32769 00:16:35.725 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:16:35.725 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.725 job5: (groupid=0, jobs=1): err= 0: pid=1153390: Thu Apr 18 13:45:38 2024 00:16:35.725 read: IOPS=214, BW=214MiB/s (224MB/s)(2584MiB/12073msec) 00:16:35.725 slat (usec): min=54, max=2059.1k, avg=3869.08, stdev=61266.13 00:16:35.725 clat (msec): min=122, max=6756, avg=449.72, stdev=785.97 00:16:35.725 lat (msec): min=122, max=6768, avg=453.59, stdev=794.77 00:16:35.725 clat percentiles (msec): 00:16:35.725 | 1.00th=[ 127], 5.00th=[ 129], 10.00th=[ 130], 20.00th=[ 130], 00:16:35.725 | 30.00th=[ 131], 40.00th=[ 131], 50.00th=[ 132], 60.00th=[ 132], 00:16:35.725 | 70.00th=[ 296], 80.00th=[ 321], 90.00th=[ 2232], 95.00th=[ 2534], 00:16:35.725 | 99.00th=[ 3574], 99.50th=[ 3708], 99.90th=[ 4665], 99.95th=[ 6745], 00:16:35.725 | 99.99th=[ 6745] 00:16:35.725 bw ( KiB/s): min= 1519, max=944128, per=22.36%, avg=457331.82, stdev=351951.25, samples=11 00:16:35.725 iops : min= 1, max= 922, avg=446.45, stdev=343.86, samples=11 00:16:35.725 lat (msec) : 250=66.06%, 500=21.40%, 750=1.70%, >=2000=10.84% 00:16:35.725 cpu : usr=0.21%, sys=2.38%, ctx=2354, majf=0, minf=32769 00:16:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.725 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.725 job5: (groupid=0, jobs=1): err= 0: pid=1153391: Thu Apr 18 13:45:38 2024 00:16:35.725 read: IOPS=32, BW=32.1MiB/s (33.6MB/s)(390MiB/12160msec) 00:16:35.725 slat (usec): min=62, max=2115.4k, avg=25879.81, stdev=187625.59 00:16:35.725 clat (msec): min=249, max=8604, avg=2046.79, stdev=2047.87 00:16:35.725 lat (msec): min=252, max=8612, avg=2072.67, stdev=2072.75 00:16:35.725 clat percentiles (msec): 00:16:35.725 | 1.00th=[ 255], 5.00th=[ 355], 10.00th=[ 435], 20.00th=[ 885], 00:16:35.725 | 30.00th=[ 986], 40.00th=[ 1083], 50.00th=[ 1150], 
60.00th=[ 2232], 00:16:35.725 | 70.00th=[ 2299], 80.00th=[ 2366], 90.00th=[ 3071], 95.00th=[ 8557], 00:16:35.725 | 99.00th=[ 8557], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:16:35.725 | 99.99th=[ 8658] 00:16:35.725 bw ( KiB/s): min= 1903, max=319488, per=6.58%, avg=134619.75, stdev=134150.11, samples=4 00:16:35.725 iops : min= 1, max= 312, avg=131.25, stdev=131.29, samples=4 00:16:35.725 lat (msec) : 250=0.26%, 500=12.31%, 750=4.62%, 1000=13.08%, 2000=25.13% 00:16:35.725 lat (msec) : >=2000=44.62% 00:16:35.725 cpu : usr=0.02%, sys=0.99%, ctx=565, majf=0, minf=32769 00:16:35.725 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:35.725 issued rwts: total=390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.725 job5: (groupid=0, jobs=1): err= 0: pid=1153392: Thu Apr 18 13:45:38 2024 00:16:35.725 read: IOPS=141, BW=141MiB/s (148MB/s)(1704MiB/12082msec) 00:16:35.725 slat (usec): min=45, max=2063.3k, avg=5881.06, stdev=89803.93 00:16:35.725 clat (msec): min=103, max=7959, avg=364.96, stdev=780.15 00:16:35.725 lat (msec): min=104, max=8107, avg=370.84, stdev=802.54 00:16:35.725 clat percentiles (msec): 00:16:35.725 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 127], 20.00th=[ 128], 00:16:35.725 | 30.00th=[ 130], 40.00th=[ 138], 50.00th=[ 142], 60.00th=[ 148], 00:16:35.725 | 70.00th=[ 150], 80.00th=[ 182], 90.00th=[ 234], 95.00th=[ 2265], 00:16:35.725 | 99.00th=[ 2500], 99.50th=[ 4665], 99.90th=[ 6812], 99.95th=[ 7953], 00:16:35.725 | 99.99th=[ 7953] 00:16:35.725 bw ( KiB/s): min= 1519, max=1040384, per=31.56%, avg=645555.40, stdev=386145.20, samples=5 00:16:35.725 iops : min= 1, max= 1016, avg=630.20, stdev=377.28, samples=5 00:16:35.725 lat (msec) : 250=90.79%, 500=0.47%, >=2000=8.74% 00:16:35.725 cpu : usr=0.09%, sys=1.35%, ctx=1709, majf=0, minf=32769 00:16:35.725 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.725 issued rwts: total=1704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.725 job5: (groupid=0, jobs=1): err= 0: pid=1153393: Thu Apr 18 13:45:38 2024 00:16:35.725 read: IOPS=69, BW=69.7MiB/s (73.1MB/s)(846MiB/12138msec) 00:16:35.725 slat (usec): min=53, max=2115.4k, avg=14206.21, stdev=146809.23 00:16:35.725 clat (msec): min=113, max=6857, avg=713.27, stdev=1000.25 00:16:35.725 lat (msec): min=154, max=8174, avg=727.47, stdev=1033.81 00:16:35.725 clat percentiles (msec): 00:16:35.725 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 157], 20.00th=[ 169], 00:16:35.725 | 30.00th=[ 203], 40.00th=[ 228], 50.00th=[ 257], 60.00th=[ 355], 00:16:35.725 | 70.00th=[ 535], 80.00th=[ 751], 90.00th=[ 2433], 95.00th=[ 2500], 00:16:35.725 | 99.00th=[ 4732], 99.50th=[ 6745], 99.90th=[ 6879], 99.95th=[ 6879], 00:16:35.725 | 99.99th=[ 6879] 00:16:35.725 bw ( KiB/s): min=153600, max=534528, per=17.96%, avg=367478.00, stdev=183647.65, samples=4 00:16:35.725 iops : min= 150, max= 522, avg=358.75, stdev=179.42, samples=4 00:16:35.725 lat (msec) : 250=46.45%, 500=22.22%, 750=11.11%, 1000=3.07%, >=2000=17.14% 00:16:35.725 cpu : usr=0.05%, sys=1.13%, ctx=1000, majf=0, 
minf=32769 00:16:35.725 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:16:35.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.725 issued rwts: total=846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.725 job5: (groupid=0, jobs=1): err= 0: pid=1153394: Thu Apr 18 13:45:38 2024 00:16:35.726 read: IOPS=22, BW=22.8MiB/s (24.0MB/s)(230MiB/10066msec) 00:16:35.726 slat (usec): min=54, max=2025.0k, avg=43484.96, stdev=267782.96 00:16:35.726 clat (msec): min=62, max=9904, avg=4781.68, stdev=3257.50 00:16:35.726 lat (msec): min=78, max=10038, avg=4825.16, stdev=3259.62 00:16:35.726 clat percentiles (msec): 00:16:35.726 | 1.00th=[ 87], 5.00th=[ 133], 10.00th=[ 140], 20.00th=[ 1569], 00:16:35.726 | 30.00th=[ 2140], 40.00th=[ 4111], 50.00th=[ 4463], 60.00th=[ 6409], 00:16:35.726 | 70.00th=[ 8356], 80.00th=[ 8490], 90.00th=[ 8490], 95.00th=[ 8490], 00:16:35.726 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 9866], 99.95th=[ 9866], 00:16:35.726 | 99.99th=[ 9866] 00:16:35.726 bw ( KiB/s): min= 4096, max=75776, per=1.72%, avg=35147.67, stdev=27092.70, samples=6 00:16:35.726 iops : min= 4, max= 74, avg=34.17, stdev=26.51, samples=6 00:16:35.726 lat (msec) : 100=1.74%, 250=17.39%, 2000=3.04%, >=2000=77.83% 00:16:35.726 cpu : usr=0.02%, sys=0.79%, ctx=176, majf=0, minf=32769 00:16:35.726 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=13.9%, >=64=72.6% 00:16:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.726 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:16:35.726 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.726 job5: (groupid=0, jobs=1): err= 0: pid=1153395: Thu Apr 18 13:45:38 2024 00:16:35.726 read: IOPS=52, BW=52.1MiB/s (54.7MB/s)(636MiB/12202msec) 00:16:35.726 slat (usec): min=55, max=2158.2k, avg=18958.14, stdev=177407.58 00:16:35.726 clat (msec): min=128, max=6570, avg=1559.53, stdev=2042.45 00:16:35.726 lat (msec): min=129, max=8589, avg=1578.49, stdev=2064.67 00:16:35.726 clat percentiles (msec): 00:16:35.726 | 1.00th=[ 130], 5.00th=[ 134], 10.00th=[ 134], 20.00th=[ 136], 00:16:35.726 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 264], 00:16:35.726 | 70.00th=[ 1838], 80.00th=[ 4144], 90.00th=[ 5805], 95.00th=[ 5873], 00:16:35.726 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 6544], 99.95th=[ 6544], 00:16:35.726 | 99.99th=[ 6544] 00:16:35.726 bw ( KiB/s): min= 2043, max=571392, per=10.17%, avg=208075.80, stdev=230324.19, samples=5 00:16:35.726 iops : min= 1, max= 558, avg=203.00, stdev=225.15, samples=5 00:16:35.726 lat (msec) : 250=58.65%, 500=2.20%, 2000=13.21%, >=2000=25.94% 00:16:35.726 cpu : usr=0.01%, sys=0.86%, ctx=661, majf=0, minf=32769 00:16:35.726 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:16:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.726 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:35.726 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.726 job5: (groupid=0, jobs=1): err= 0: pid=1153396: Thu Apr 18 13:45:38 2024 00:16:35.726 read: IOPS=3, BW=3701KiB/s (3790kB/s)(44.0MiB/12175msec) 
00:16:35.726 slat (usec): min=447, max=2150.8k, avg=273483.62, stdev=674894.32 00:16:35.726 clat (msec): min=141, max=10798, avg=5083.72, stdev=2887.24 00:16:35.726 lat (msec): min=2292, max=12174, avg=5357.21, stdev=2976.72 00:16:35.726 clat percentiles (msec): 00:16:35.726 | 1.00th=[ 142], 5.00th=[ 2299], 10.00th=[ 2299], 20.00th=[ 2299], 00:16:35.726 | 30.00th=[ 2299], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 6477], 00:16:35.726 | 70.00th=[ 6544], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[10805], 00:16:35.726 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:35.726 | 99.99th=[10805] 00:16:35.726 lat (msec) : 250=2.27%, >=2000=97.73% 00:16:35.726 cpu : usr=0.00%, sys=0.20%, ctx=75, majf=0, minf=11265 00:16:35.726 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:16:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.726 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:35.726 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.726 job5: (groupid=0, jobs=1): err= 0: pid=1153397: Thu Apr 18 13:45:38 2024 00:16:35.726 read: IOPS=26, BW=26.3MiB/s (27.6MB/s)(320MiB/12145msec) 00:16:35.726 slat (usec): min=59, max=2084.7k, avg=31314.84, stdev=219992.37 00:16:35.726 clat (msec): min=278, max=6090, avg=3318.00, stdev=2564.78 00:16:35.726 lat (msec): min=280, max=6093, avg=3349.32, stdev=2566.08 00:16:35.726 clat percentiles (msec): 00:16:35.726 | 1.00th=[ 279], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 292], 00:16:35.726 | 30.00th=[ 296], 40.00th=[ 1838], 50.00th=[ 4279], 60.00th=[ 5805], 00:16:35.726 | 70.00th=[ 5873], 80.00th=[ 5940], 90.00th=[ 6007], 95.00th=[ 6074], 00:16:35.726 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:16:35.726 | 99.99th=[ 6074] 00:16:35.726 bw ( KiB/s): min= 2048, max=266240, per=3.86%, avg=79052.80, stdev=110578.73, samples=5 00:16:35.726 iops : min= 2, max= 260, avg=77.20, stdev=107.99, samples=5 00:16:35.726 lat (msec) : 500=36.88%, 2000=4.06%, >=2000=59.06% 00:16:35.726 cpu : usr=0.04%, sys=0.76%, ctx=290, majf=0, minf=32769 00:16:35.726 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3% 00:16:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.726 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:35.726 issued rwts: total=320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.726 job5: (groupid=0, jobs=1): err= 0: pid=1153398: Thu Apr 18 13:45:38 2024 00:16:35.726 read: IOPS=144, BW=145MiB/s (152MB/s)(1746MiB/12061msec) 00:16:35.726 slat (usec): min=65, max=2047.1k, avg=5723.66, stdev=86375.56 00:16:35.726 clat (msec): min=134, max=5944, avg=854.98, stdev=1623.17 00:16:35.726 lat (msec): min=135, max=5946, avg=860.70, stdev=1628.82 00:16:35.726 clat percentiles (msec): 00:16:35.726 | 1.00th=[ 136], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 138], 00:16:35.726 | 30.00th=[ 138], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 140], 00:16:35.726 | 70.00th=[ 205], 80.00th=[ 292], 90.00th=[ 4329], 95.00th=[ 5403], 00:16:35.726 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5940], 99.95th=[ 5940], 00:16:35.726 | 99.99th=[ 5940] 00:16:35.726 bw ( KiB/s): min= 2052, max=948224, per=14.73%, avg=301419.73, stdev=378227.63, samples=11 00:16:35.726 iops : min= 2, max= 926, avg=294.09, stdev=369.58, 
samples=11 00:16:35.726 lat (msec) : 250=72.22%, 500=10.94%, 750=0.17%, 2000=0.74%, >=2000=15.92% 00:16:35.726 cpu : usr=0.05%, sys=1.72%, ctx=1742, majf=0, minf=32769 00:16:35.726 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:16:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.726 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.726 issued rwts: total=1746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.726 job5: (groupid=0, jobs=1): err= 0: pid=1153399: Thu Apr 18 13:45:38 2024 00:16:35.726 read: IOPS=32, BW=32.0MiB/s (33.6MB/s)(389MiB/12156msec) 00:16:35.726 slat (usec): min=62, max=2003.6k, avg=25742.15, stdev=195279.06 00:16:35.726 clat (msec): min=230, max=5949, avg=2479.57, stdev=2527.37 00:16:35.726 lat (msec): min=232, max=5951, avg=2505.32, stdev=2535.92 00:16:35.726 clat percentiles (msec): 00:16:35.726 | 1.00th=[ 232], 5.00th=[ 255], 10.00th=[ 271], 20.00th=[ 284], 00:16:35.726 | 30.00th=[ 288], 40.00th=[ 296], 50.00th=[ 347], 60.00th=[ 2500], 00:16:35.726 | 70.00th=[ 5738], 80.00th=[ 5805], 90.00th=[ 5873], 95.00th=[ 5873], 00:16:35.726 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:16:35.726 | 99.99th=[ 5940] 00:16:35.726 bw ( KiB/s): min= 2048, max=440320, per=5.25%, avg=107311.00, stdev=187546.30, samples=5 00:16:35.726 iops : min= 2, max= 430, avg=104.60, stdev=183.28, samples=5 00:16:35.726 lat (msec) : 250=4.88%, 500=48.07%, 2000=2.83%, >=2000=44.22% 00:16:35.726 cpu : usr=0.04%, sys=0.82%, ctx=362, majf=0, minf=32769 00:16:35.726 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:16:35.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.726 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:35.726 issued rwts: total=389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.726 job5: (groupid=0, jobs=1): err= 0: pid=1153400: Thu Apr 18 13:45:38 2024 00:16:35.726 read: IOPS=6, BW=6440KiB/s (6594kB/s)(76.0MiB/12085msec) 00:16:35.726 slat (usec): min=420, max=2101.1k, avg=131653.88, stdev=465985.86 00:16:35.726 clat (msec): min=2078, max=12061, avg=7170.75, stdev=3349.46 00:16:35.726 lat (msec): min=2096, max=12084, avg=7302.41, stdev=3343.29 00:16:35.726 clat percentiles (msec): 00:16:35.726 | 1.00th=[ 2072], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:16:35.726 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8557], 00:16:35.726 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[12013], 00:16:35.726 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:16:35.726 | 99.99th=[12013] 00:16:35.726 lat (msec) : >=2000=100.00% 00:16:35.726 cpu : usr=0.00%, sys=0.33%, ctx=87, majf=0, minf=19457 00:16:35.727 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:16:35.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.727 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:35.727 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.727 00:16:35.727 Run status group 0 (all jobs): 00:16:35.727 READ: bw=1998MiB/s (2095MB/s), 1012KiB/s-214MiB/s (1036kB/s-224MB/s), io=28.0GiB (30.1GB), run=10066-14357msec 00:16:35.727 
00:16:35.727 Disk stats (read/write): 00:16:35.727 nvme0n1: ios=22539/0, merge=0/0, ticks=11167469/0, in_queue=11167469, util=98.81% 00:16:35.727 nvme1n1: ios=10012/0, merge=0/0, ticks=11674724/0, in_queue=11674724, util=99.02% 00:16:35.727 nvme2n1: ios=20600/0, merge=0/0, ticks=12241794/0, in_queue=12241794, util=99.06% 00:16:35.727 nvme3n1: ios=53053/0, merge=0/0, ticks=10208541/0, in_queue=10208541, util=99.17% 00:16:35.727 nvme4n1: ios=41270/0, merge=0/0, ticks=14374517/0, in_queue=14374517, util=99.23% 00:16:35.727 nvme5n1: ios=80546/0, merge=0/0, ticks=9974834/0, in_queue=9974834, util=99.03% 00:16:35.984 13:45:38 -- target/srq_overwhelm.sh@38 -- # sync 00:16:35.984 13:45:38 -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:16:35.984 13:45:38 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:35.984 13:45:38 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:16:36.914 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.914 13:45:39 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:16:36.914 13:45:39 -- common/autotest_common.sh@1205 -- # local i=0 00:16:36.914 13:45:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:36.914 13:45:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000000 00:16:36.914 13:45:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:36.914 13:45:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000000 00:16:36.914 13:45:39 -- common/autotest_common.sh@1217 -- # return 0 00:16:36.914 13:45:39 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:36.914 13:45:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.914 13:45:39 -- common/autotest_common.sh@10 -- # set +x 00:16:36.914 13:45:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.914 13:45:39 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:36.914 13:45:39 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.284 13:45:40 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:16:38.284 13:45:40 -- common/autotest_common.sh@1205 -- # local i=0 00:16:38.284 13:45:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:38.284 13:45:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000001 00:16:38.284 13:45:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:38.284 13:45:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000001 00:16:38.284 13:45:40 -- common/autotest_common.sh@1217 -- # return 0 00:16:38.284 13:45:40 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.284 13:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.284 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:16:38.284 13:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.284 13:45:40 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:38.284 13:45:40 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:39.216 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:39.216 13:45:41 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:16:39.216 13:45:41 -- common/autotest_common.sh@1205 -- # local i=0 00:16:39.216 13:45:41 -- 
common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:39.216 13:45:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000002 00:16:39.216 13:45:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:39.216 13:45:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000002 00:16:39.216 13:45:41 -- common/autotest_common.sh@1217 -- # return 0 00:16:39.216 13:45:41 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:39.216 13:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.216 13:45:41 -- common/autotest_common.sh@10 -- # set +x 00:16:39.216 13:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.216 13:45:41 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:39.216 13:45:41 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:40.585 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:40.585 13:45:43 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:16:40.585 13:45:43 -- common/autotest_common.sh@1205 -- # local i=0 00:16:40.585 13:45:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:40.585 13:45:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000003 00:16:40.585 13:45:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:40.585 13:45:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000003 00:16:40.585 13:45:43 -- common/autotest_common.sh@1217 -- # return 0 00:16:40.585 13:45:43 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:40.585 13:45:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.585 13:45:43 -- common/autotest_common.sh@10 -- # set +x 00:16:40.585 13:45:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.585 13:45:43 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:40.585 13:45:43 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:41.513 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:41.513 13:45:44 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:16:41.513 13:45:44 -- common/autotest_common.sh@1205 -- # local i=0 00:16:41.513 13:45:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:41.513 13:45:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000004 00:16:41.513 13:45:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:41.513 13:45:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000004 00:16:41.513 13:45:44 -- common/autotest_common.sh@1217 -- # return 0 00:16:41.513 13:45:44 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:41.513 13:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.513 13:45:44 -- common/autotest_common.sh@10 -- # set +x 00:16:41.513 13:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.513 13:45:44 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:41.513 13:45:44 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:42.881 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:42.881 13:45:45 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:16:42.881 13:45:45 -- common/autotest_common.sh@1205 -- # local i=0 00:16:42.881 13:45:45 -- common/autotest_common.sh@1206 -- # 
lsblk -o NAME,SERIAL 00:16:42.881 13:45:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000005 00:16:42.882 13:45:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:42.882 13:45:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000005 00:16:42.882 13:45:45 -- common/autotest_common.sh@1217 -- # return 0 00:16:42.882 13:45:45 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:42.882 13:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.882 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:16:42.882 13:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.882 13:45:45 -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:42.882 13:45:45 -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:16:42.882 13:45:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:42.882 13:45:45 -- nvmf/common.sh@117 -- # sync 00:16:42.882 13:45:45 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:42.882 13:45:45 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:42.882 13:45:45 -- nvmf/common.sh@120 -- # set +e 00:16:42.882 13:45:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.882 13:45:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:42.882 rmmod nvme_rdma 00:16:42.882 rmmod nvme_fabrics 00:16:42.882 13:45:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.882 13:45:45 -- nvmf/common.sh@124 -- # set -e 00:16:42.882 13:45:45 -- nvmf/common.sh@125 -- # return 0 00:16:42.882 13:45:45 -- nvmf/common.sh@478 -- # '[' -n 1152283 ']' 00:16:42.882 13:45:45 -- nvmf/common.sh@479 -- # killprocess 1152283 00:16:42.882 13:45:45 -- common/autotest_common.sh@936 -- # '[' -z 1152283 ']' 00:16:42.882 13:45:45 -- common/autotest_common.sh@940 -- # kill -0 1152283 00:16:42.882 13:45:45 -- common/autotest_common.sh@941 -- # uname 00:16:42.882 13:45:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.882 13:45:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1152283 00:16:42.882 13:45:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.882 13:45:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.882 13:45:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1152283' 00:16:42.882 killing process with pid 1152283 00:16:42.882 13:45:45 -- common/autotest_common.sh@955 -- # kill 1152283 00:16:42.882 13:45:45 -- common/autotest_common.sh@960 -- # wait 1152283 00:16:43.139 13:45:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:43.139 13:45:45 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:43.139 00:16:43.139 real 0m33.477s 00:16:43.139 user 2m1.196s 00:16:43.139 sys 0m10.492s 00:16:43.139 13:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.139 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:16:43.139 ************************************ 00:16:43.139 END TEST nvmf_srq_overwhelm 00:16:43.139 ************************************ 00:16:43.139 13:45:45 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:43.139 13:45:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.139 13:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.139 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:16:43.397 ************************************ 00:16:43.397 START TEST nvmf_shutdown 00:16:43.397 
************************************ 00:16:43.397 13:45:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:43.397 * Looking for test storage... 00:16:43.397 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:43.397 13:45:46 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.397 13:45:46 -- nvmf/common.sh@7 -- # uname -s 00:16:43.397 13:45:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.397 13:45:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.397 13:45:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.397 13:45:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.397 13:45:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.397 13:45:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.397 13:45:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.397 13:45:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.397 13:45:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.397 13:45:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.397 13:45:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:16:43.397 13:45:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:16:43.397 13:45:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.397 13:45:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.397 13:45:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.397 13:45:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.397 13:45:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:43.397 13:45:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.397 13:45:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.397 13:45:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.397 13:45:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.397 13:45:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.397 13:45:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.397 13:45:46 -- paths/export.sh@5 -- # export PATH 00:16:43.397 13:45:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.397 13:45:46 -- nvmf/common.sh@47 -- # : 0 00:16:43.397 13:45:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.397 13:45:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.397 13:45:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.397 13:45:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.397 13:45:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.397 13:45:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.397 13:45:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.397 13:45:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.397 13:45:46 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.397 13:45:46 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.397 13:45:46 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:43.397 13:45:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:43.397 13:45:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.397 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:16:43.655 ************************************ 00:16:43.655 START TEST nvmf_shutdown_tc1 00:16:43.655 ************************************ 00:16:43.655 13:45:46 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:43.655 13:45:46 -- target/shutdown.sh@74 -- # starttarget 00:16:43.655 13:45:46 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:43.655 13:45:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:43.655 13:45:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.655 13:45:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:43.655 13:45:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:43.655 13:45:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:43.655 13:45:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.655 13:45:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.655 13:45:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.655 13:45:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:43.655 13:45:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:43.655 13:45:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:43.655 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:16:46.182 13:45:48 -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:16:46.182 13:45:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.182 13:45:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:46.182 13:45:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.182 13:45:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.182 13:45:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.182 13:45:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.182 13:45:48 -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.182 13:45:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.182 13:45:48 -- nvmf/common.sh@296 -- # e810=() 00:16:46.182 13:45:48 -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.182 13:45:48 -- nvmf/common.sh@297 -- # x722=() 00:16:46.182 13:45:48 -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.182 13:45:48 -- nvmf/common.sh@298 -- # mlx=() 00:16:46.182 13:45:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.182 13:45:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.182 13:45:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.182 13:45:48 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:46.182 13:45:48 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:46.182 13:45:48 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:46.182 13:45:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.182 13:45:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.182 13:45:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:16:46.182 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:16:46.182 13:45:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.182 13:45:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.182 13:45:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:16:46.182 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:16:46.182 13:45:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.182 13:45:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.182 13:45:48 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.182 13:45:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.182 13:45:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.182 13:45:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.182 13:45:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:16:46.182 Found net devices under 0000:81:00.0: mlx_0_0 00:16:46.182 13:45:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.182 13:45:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.182 13:45:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.182 13:45:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.182 13:45:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.182 13:45:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:16:46.182 Found net devices under 0000:81:00.1: mlx_0_1 00:16:46.182 13:45:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.182 13:45:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:46.182 13:45:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:46.182 13:45:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:46.182 13:45:48 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:46.182 13:45:48 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:46.182 13:45:48 -- nvmf/common.sh@58 -- # uname 00:16:46.182 13:45:48 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:46.182 13:45:48 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:46.182 13:45:48 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:46.182 13:45:48 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:46.182 13:45:48 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:46.182 13:45:48 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:46.183 13:45:48 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:46.183 13:45:48 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:46.183 13:45:48 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:46.183 13:45:48 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:46.183 13:45:48 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:46.183 13:45:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.183 13:45:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:46.183 13:45:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:46.183 13:45:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.183 13:45:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:46.183 13:45:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:46.183 13:45:48 -- nvmf/common.sh@105 -- # continue 2 
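A minimal standalone sketch of the PCI-to-netdev lookup that nvmf/common.sh performs in the trace above; the bus address is the one reported for this run, and the lookup simply globs the kernel's /sys entry for the device, as the traced pci_net_devs assignments do.

pci="0000:81:00.0"                                # BDF reported above for this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev symlinks the kernel exposes for the device
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the directory part, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # prints mlx_0_0 on this host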
00:16:46.183 13:45:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@105 -- # continue 2 00:16:46.183 13:45:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:46.183 13:45:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:46.183 13:45:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.183 13:45:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:46.183 13:45:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:46.183 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.183 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:16:46.183 altname enp129s0f0np0 00:16:46.183 inet 192.168.100.8/24 scope global mlx_0_0 00:16:46.183 valid_lft forever preferred_lft forever 00:16:46.183 13:45:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:46.183 13:45:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.183 13:45:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:46.183 13:45:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:46.183 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.183 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:16:46.183 altname enp129s0f1np1 00:16:46.183 inet 192.168.100.9/24 scope global mlx_0_1 00:16:46.183 valid_lft forever preferred_lft forever 00:16:46.183 13:45:48 -- nvmf/common.sh@411 -- # return 0 00:16:46.183 13:45:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:46.183 13:45:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:46.183 13:45:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:46.183 13:45:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:46.183 13:45:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.183 13:45:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:46.183 13:45:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:46.183 13:45:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.183 13:45:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:46.183 13:45:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:46.183 13:45:48 -- 
nvmf/common.sh@105 -- # continue 2 00:16:46.183 13:45:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.183 13:45:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.183 13:45:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@105 -- # continue 2 00:16:46.183 13:45:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:46.183 13:45:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:46.183 13:45:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.183 13:45:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:46.183 13:45:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.183 13:45:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.183 13:45:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:46.183 192.168.100.9' 00:16:46.183 13:45:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:46.183 192.168.100.9' 00:16:46.183 13:45:48 -- nvmf/common.sh@446 -- # head -n 1 00:16:46.183 13:45:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:46.183 13:45:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:46.183 192.168.100.9' 00:16:46.183 13:45:48 -- nvmf/common.sh@447 -- # tail -n +2 00:16:46.183 13:45:48 -- nvmf/common.sh@447 -- # head -n 1 00:16:46.183 13:45:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:46.183 13:45:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:46.183 13:45:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:46.183 13:45:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:46.441 13:45:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:46.441 13:45:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:46.441 13:45:49 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:46.441 13:45:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:46.441 13:45:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:46.441 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.441 13:45:49 -- nvmf/common.sh@470 -- # nvmfpid=1158043 00:16:46.441 13:45:49 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:46.441 13:45:49 -- nvmf/common.sh@471 -- # waitforlisten 1158043 00:16:46.441 13:45:49 -- common/autotest_common.sh@817 -- # '[' -z 1158043 ']' 00:16:46.441 13:45:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.441 13:45:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.441 13:45:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
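A condensed sketch of the address discovery the trace above steps through: each RDMA netdev's IPv4 address is read with the same ip/awk/cut pipeline shown at nvmf/common.sh@113, and the first and second lines of the resulting list become the target IPs. The interface names and addresses are the ones printed in this run.

get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1   # take the CIDR field, keep the address part
}
RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here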
00:16:46.441 13:45:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.441 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.441 [2024-04-18 13:45:49.051754] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:46.441 [2024-04-18 13:45:49.051846] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.441 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.441 [2024-04-18 13:45:49.136096] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.699 [2024-04-18 13:45:49.261904] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.699 [2024-04-18 13:45:49.261967] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.699 [2024-04-18 13:45:49.261985] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.699 [2024-04-18 13:45:49.261998] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.699 [2024-04-18 13:45:49.262010] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.699 [2024-04-18 13:45:49.262078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.699 [2024-04-18 13:45:49.262133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.699 [2024-04-18 13:45:49.262191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.699 [2024-04-18 13:45:49.262187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:46.699 13:45:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.699 13:45:49 -- common/autotest_common.sh@850 -- # return 0 00:16:46.699 13:45:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:46.699 13:45:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:46.699 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.699 13:45:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.699 13:45:49 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:46.699 13:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.699 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.699 [2024-04-18 13:45:49.460757] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaaf350/0xab3840) succeed. 00:16:46.699 [2024-04-18 13:45:49.473082] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xab0940/0xaf4ed0) succeed. 
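The rpc_cmd invocation above hands its arguments to SPDK's JSON-RPC client; a hedged hand-run equivalent, assuming rpc_cmd wraps scripts/rpc.py and that the target is listening on the default /var/tmp/spdk.sock reported earlier, would look roughly like this (transport flags copied from the trace):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192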
00:16:46.956 13:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.956 13:45:49 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:46.956 13:45:49 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:46.956 13:45:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:46.956 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.956 13:45:49 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.956 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.956 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.957 13:45:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:46.957 13:45:49 -- target/shutdown.sh@28 -- # cat 00:16:46.957 13:45:49 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:46.957 13:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.957 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.957 Malloc1 00:16:46.957 [2024-04-18 13:45:49.737749] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:47.213 Malloc2 00:16:47.213 Malloc3 00:16:47.213 Malloc4 00:16:47.213 Malloc5 00:16:47.213 Malloc6 00:16:47.472 Malloc7 00:16:47.472 Malloc8 00:16:47.472 Malloc9 00:16:47.472 Malloc10 00:16:47.472 13:45:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.472 13:45:50 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:47.472 13:45:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:47.472 13:45:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.472 13:45:50 -- target/shutdown.sh@78 -- # perfpid=1158223 00:16:47.472 13:45:50 -- target/shutdown.sh@79 -- # waitforlisten 1158223 /var/tmp/bdevperf.sock 00:16:47.472 13:45:50 -- common/autotest_common.sh@817 -- # '[' -z 1158223 ']' 00:16:47.472 13:45:50 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:47.472 13:45:50 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:47.472 13:45:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.472 13:45:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.472 13:45:50 -- nvmf/common.sh@521 -- # config=() 00:16:47.472 13:45:50 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.472 13:45:50 -- nvmf/common.sh@521 -- # local subsystem config 00:16:47.472 13:45:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.472 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.472 13:45:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.472 { 00:16:47.472 "params": { 00:16:47.472 "name": "Nvme$subsystem", 00:16:47.472 "trtype": "$TEST_TRANSPORT", 00:16:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.472 "adrfam": "ipv4", 00:16:47.472 "trsvcid": "$NVMF_PORT", 00:16:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.472 "hdgst": ${hdgst:-false}, 00:16:47.472 "ddgst": ${ddgst:-false} 00:16:47.472 }, 00:16:47.472 "method": "bdev_nvme_attach_controller" 00:16:47.472 } 00:16:47.472 EOF 00:16:47.472 )") 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.472 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.472 { 00:16:47.472 "params": { 00:16:47.472 "name": "Nvme$subsystem", 00:16:47.472 "trtype": "$TEST_TRANSPORT", 00:16:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.472 "adrfam": "ipv4", 00:16:47.472 "trsvcid": "$NVMF_PORT", 00:16:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.472 "hdgst": ${hdgst:-false}, 00:16:47.472 "ddgst": ${ddgst:-false} 00:16:47.472 }, 00:16:47.472 "method": "bdev_nvme_attach_controller" 00:16:47.472 } 00:16:47.472 EOF 00:16:47.472 )") 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.472 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.472 { 00:16:47.472 "params": { 00:16:47.472 "name": "Nvme$subsystem", 00:16:47.472 "trtype": "$TEST_TRANSPORT", 00:16:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.472 "adrfam": "ipv4", 00:16:47.472 "trsvcid": "$NVMF_PORT", 00:16:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.472 "hdgst": ${hdgst:-false}, 00:16:47.472 "ddgst": ${ddgst:-false} 00:16:47.472 }, 00:16:47.472 "method": "bdev_nvme_attach_controller" 00:16:47.472 } 00:16:47.472 EOF 00:16:47.472 )") 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.472 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.472 { 00:16:47.472 "params": { 00:16:47.472 "name": "Nvme$subsystem", 00:16:47.472 "trtype": "$TEST_TRANSPORT", 00:16:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.472 "adrfam": "ipv4", 00:16:47.472 "trsvcid": "$NVMF_PORT", 00:16:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.472 "hdgst": ${hdgst:-false}, 00:16:47.472 "ddgst": ${ddgst:-false} 00:16:47.472 }, 00:16:47.472 "method": "bdev_nvme_attach_controller" 00:16:47.472 } 00:16:47.472 EOF 00:16:47.472 )") 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.472 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in 
"${@:-1}" 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.472 { 00:16:47.472 "params": { 00:16:47.472 "name": "Nvme$subsystem", 00:16:47.472 "trtype": "$TEST_TRANSPORT", 00:16:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.472 "adrfam": "ipv4", 00:16:47.472 "trsvcid": "$NVMF_PORT", 00:16:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.472 "hdgst": ${hdgst:-false}, 00:16:47.472 "ddgst": ${ddgst:-false} 00:16:47.472 }, 00:16:47.472 "method": "bdev_nvme_attach_controller" 00:16:47.472 } 00:16:47.472 EOF 00:16:47.472 )") 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.472 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.472 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.472 { 00:16:47.472 "params": { 00:16:47.472 "name": "Nvme$subsystem", 00:16:47.472 "trtype": "$TEST_TRANSPORT", 00:16:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.472 "adrfam": "ipv4", 00:16:47.472 "trsvcid": "$NVMF_PORT", 00:16:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.472 "hdgst": ${hdgst:-false}, 00:16:47.472 "ddgst": ${ddgst:-false} 00:16:47.472 }, 00:16:47.472 "method": "bdev_nvme_attach_controller" 00:16:47.472 } 00:16:47.472 EOF 00:16:47.473 )") 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.473 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.473 { 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme$subsystem", 00:16:47.473 "trtype": "$TEST_TRANSPORT", 00:16:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "$NVMF_PORT", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.473 "hdgst": ${hdgst:-false}, 00:16:47.473 "ddgst": ${ddgst:-false} 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 } 00:16:47.473 EOF 00:16:47.473 )") 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.473 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.473 { 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme$subsystem", 00:16:47.473 "trtype": "$TEST_TRANSPORT", 00:16:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "$NVMF_PORT", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.473 "hdgst": ${hdgst:-false}, 00:16:47.473 "ddgst": ${ddgst:-false} 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 } 00:16:47.473 EOF 00:16:47.473 )") 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.473 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.473 { 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme$subsystem", 00:16:47.473 "trtype": "$TEST_TRANSPORT", 00:16:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "$NVMF_PORT", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.473 "hdgst": ${hdgst:-false}, 00:16:47.473 "ddgst": ${ddgst:-false} 
00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 } 00:16:47.473 EOF 00:16:47.473 )") 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.473 13:45:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:47.473 { 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme$subsystem", 00:16:47.473 "trtype": "$TEST_TRANSPORT", 00:16:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "$NVMF_PORT", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:47.473 "hdgst": ${hdgst:-false}, 00:16:47.473 "ddgst": ${ddgst:-false} 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 } 00:16:47.473 EOF 00:16:47.473 )") 00:16:47.473 13:45:50 -- nvmf/common.sh@543 -- # cat 00:16:47.473 13:45:50 -- nvmf/common.sh@545 -- # jq . 00:16:47.473 13:45:50 -- nvmf/common.sh@546 -- # IFS=, 00:16:47.473 13:45:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme1", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme2", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme3", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme4", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme5", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme6", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:47.473 
"hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme7", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme8", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme9", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 },{ 00:16:47.473 "params": { 00:16:47.473 "name": "Nvme10", 00:16:47.473 "trtype": "rdma", 00:16:47.473 "traddr": "192.168.100.8", 00:16:47.473 "adrfam": "ipv4", 00:16:47.473 "trsvcid": "4420", 00:16:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:47.473 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:47.473 "hdgst": false, 00:16:47.473 "ddgst": false 00:16:47.473 }, 00:16:47.473 "method": "bdev_nvme_attach_controller" 00:16:47.473 }' 00:16:47.473 [2024-04-18 13:45:50.264501] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:16:47.473 [2024-04-18 13:45:50.264587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:47.731 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.731 [2024-04-18 13:45:50.343527] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.731 [2024-04-18 13:45:50.462424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.661 13:45:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.661 13:45:51 -- common/autotest_common.sh@850 -- # return 0 00:16:48.661 13:45:51 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:48.661 13:45:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.661 13:45:51 -- common/autotest_common.sh@10 -- # set +x 00:16:48.661 13:45:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.661 13:45:51 -- target/shutdown.sh@83 -- # kill -9 1158223 00:16:48.661 13:45:51 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:48.661 13:45:51 -- target/shutdown.sh@87 -- # sleep 1 00:16:49.594 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1158223 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:49.594 13:45:52 -- target/shutdown.sh@88 -- # kill -0 1158043 00:16:49.594 13:45:52 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:49.594 13:45:52 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:49.594 13:45:52 -- nvmf/common.sh@521 -- # config=() 00:16:49.594 13:45:52 -- nvmf/common.sh@521 -- # local subsystem config 00:16:49.594 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.594 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.594 { 00:16:49.594 "params": { 00:16:49.594 "name": "Nvme$subsystem", 00:16:49.594 "trtype": "$TEST_TRANSPORT", 00:16:49.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.594 "adrfam": "ipv4", 00:16:49.594 "trsvcid": "$NVMF_PORT", 00:16:49.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.594 "hdgst": ${hdgst:-false}, 00:16:49.594 "ddgst": ${ddgst:-false} 00:16:49.594 }, 00:16:49.594 "method": "bdev_nvme_attach_controller" 00:16:49.594 } 00:16:49.594 EOF 00:16:49.594 )") 00:16:49.594 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.595 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.595 { 00:16:49.595 "params": { 00:16:49.595 "name": "Nvme$subsystem", 00:16:49.595 "trtype": "$TEST_TRANSPORT", 00:16:49.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.595 "adrfam": "ipv4", 00:16:49.595 "trsvcid": "$NVMF_PORT", 00:16:49.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.595 "hdgst": ${hdgst:-false}, 00:16:49.595 "ddgst": ${ddgst:-false} 00:16:49.595 }, 00:16:49.595 "method": "bdev_nvme_attach_controller" 00:16:49.595 } 00:16:49.595 EOF 00:16:49.595 )") 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.595 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:49.595 { 00:16:49.595 "params": { 00:16:49.595 "name": "Nvme$subsystem", 00:16:49.595 "trtype": "$TEST_TRANSPORT", 00:16:49.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.595 "adrfam": "ipv4", 00:16:49.595 "trsvcid": "$NVMF_PORT", 00:16:49.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.595 "hdgst": ${hdgst:-false}, 00:16:49.595 "ddgst": ${ddgst:-false} 00:16:49.595 }, 00:16:49.595 "method": "bdev_nvme_attach_controller" 00:16:49.595 } 00:16:49.595 EOF 00:16:49.595 )") 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.595 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.595 { 00:16:49.595 "params": { 00:16:49.595 "name": "Nvme$subsystem", 00:16:49.595 "trtype": "$TEST_TRANSPORT", 00:16:49.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.595 "adrfam": "ipv4", 00:16:49.595 "trsvcid": "$NVMF_PORT", 00:16:49.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.595 "hdgst": ${hdgst:-false}, 00:16:49.595 "ddgst": ${ddgst:-false} 00:16:49.595 }, 00:16:49.595 "method": "bdev_nvme_attach_controller" 00:16:49.595 } 00:16:49.595 EOF 00:16:49.595 )") 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.595 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.595 { 00:16:49.595 "params": { 00:16:49.595 "name": "Nvme$subsystem", 00:16:49.595 "trtype": "$TEST_TRANSPORT", 00:16:49.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.595 "adrfam": "ipv4", 00:16:49.595 "trsvcid": "$NVMF_PORT", 00:16:49.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.595 "hdgst": ${hdgst:-false}, 00:16:49.595 "ddgst": ${ddgst:-false} 00:16:49.595 }, 00:16:49.595 "method": "bdev_nvme_attach_controller" 00:16:49.595 } 00:16:49.595 EOF 00:16:49.595 )") 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.595 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.595 { 00:16:49.595 "params": { 00:16:49.595 "name": "Nvme$subsystem", 00:16:49.595 "trtype": "$TEST_TRANSPORT", 00:16:49.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.595 "adrfam": "ipv4", 00:16:49.595 "trsvcid": "$NVMF_PORT", 00:16:49.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.595 "hdgst": ${hdgst:-false}, 00:16:49.595 "ddgst": ${ddgst:-false} 00:16:49.595 }, 00:16:49.595 "method": "bdev_nvme_attach_controller" 00:16:49.595 } 00:16:49.595 EOF 00:16:49.595 )") 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.595 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.595 { 00:16:49.595 "params": { 00:16:49.595 "name": "Nvme$subsystem", 00:16:49.595 "trtype": "$TEST_TRANSPORT", 00:16:49.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.595 "adrfam": "ipv4", 00:16:49.595 "trsvcid": "$NVMF_PORT", 00:16:49.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.595 "hdgst": ${hdgst:-false}, 00:16:49.595 "ddgst": ${ddgst:-false} 00:16:49.595 }, 00:16:49.595 "method": 
"bdev_nvme_attach_controller" 00:16:49.595 } 00:16:49.595 EOF 00:16:49.595 )") 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.595 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.595 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.595 { 00:16:49.595 "params": { 00:16:49.595 "name": "Nvme$subsystem", 00:16:49.595 "trtype": "$TEST_TRANSPORT", 00:16:49.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.595 "adrfam": "ipv4", 00:16:49.595 "trsvcid": "$NVMF_PORT", 00:16:49.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.595 "hdgst": ${hdgst:-false}, 00:16:49.595 "ddgst": ${ddgst:-false} 00:16:49.595 }, 00:16:49.595 "method": "bdev_nvme_attach_controller" 00:16:49.595 } 00:16:49.595 EOF 00:16:49.595 )") 00:16:49.893 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.893 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.893 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.893 { 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme$subsystem", 00:16:49.893 "trtype": "$TEST_TRANSPORT", 00:16:49.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "$NVMF_PORT", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.893 "hdgst": ${hdgst:-false}, 00:16:49.893 "ddgst": ${ddgst:-false} 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 } 00:16:49.893 EOF 00:16:49.893 )") 00:16:49.893 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.893 13:45:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:49.893 13:45:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:49.893 { 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme$subsystem", 00:16:49.893 "trtype": "$TEST_TRANSPORT", 00:16:49.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "$NVMF_PORT", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.893 "hdgst": ${hdgst:-false}, 00:16:49.893 "ddgst": ${ddgst:-false} 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 } 00:16:49.893 EOF 00:16:49.893 )") 00:16:49.893 13:45:52 -- nvmf/common.sh@543 -- # cat 00:16:49.893 13:45:52 -- nvmf/common.sh@545 -- # jq . 
00:16:49.893 13:45:52 -- nvmf/common.sh@546 -- # IFS=, 00:16:49.893 13:45:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme1", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme2", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme3", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme4", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme5", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme6", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme7", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme8", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 
00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme9", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 },{ 00:16:49.893 "params": { 00:16:49.893 "name": "Nvme10", 00:16:49.893 "trtype": "rdma", 00:16:49.893 "traddr": "192.168.100.8", 00:16:49.893 "adrfam": "ipv4", 00:16:49.893 "trsvcid": "4420", 00:16:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:49.893 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:49.893 "hdgst": false, 00:16:49.893 "ddgst": false 00:16:49.893 }, 00:16:49.893 "method": "bdev_nvme_attach_controller" 00:16:49.893 }' 00:16:49.893 [2024-04-18 13:45:52.415711] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:49.893 [2024-04-18 13:45:52.415813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158515 ] 00:16:49.893 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.893 [2024-04-18 13:45:52.507159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.893 [2024-04-18 13:45:52.630287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.825 Running I/O for 1 seconds... 00:16:52.196 00:16:52.196 Latency(us) 00:16:52.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.196 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme1n1 : 1.20 288.52 18.03 0.00 0.00 215078.67 22524.97 237677.23 00:16:52.196 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme2n1 : 1.20 281.52 17.60 0.00 0.00 215893.26 29321.29 223696.21 00:16:52.196 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme3n1 : 1.20 293.64 18.35 0.00 0.00 205074.17 36700.16 215928.98 00:16:52.196 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme4n1 : 1.20 301.55 18.85 0.00 0.00 196229.43 6844.87 201947.97 00:16:52.196 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme5n1 : 1.20 279.60 17.48 0.00 0.00 206128.64 38641.97 192627.29 00:16:52.196 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme6n1 : 1.20 279.32 17.46 0.00 0.00 202717.76 38253.61 179423.00 00:16:52.196 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme7n1 : 1.21 319.66 19.98 0.00 0.00 178354.79 5218.61 170102.33 00:16:52.196 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme8n1 : 1.21 334.97 20.94 
0.00 0.00 166985.75 5315.70 143693.75 00:16:52.196 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme9n1 : 1.21 316.46 19.78 0.00 0.00 173576.69 5242.88 145247.19 00:16:52.196 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.196 Verification LBA range: start 0x0 length 0x400 00:16:52.196 Nvme10n1 : 1.21 316.14 19.76 0.00 0.00 170263.58 4951.61 138256.69 00:16:52.196 =================================================================================================================== 00:16:52.196 Total : 3011.38 188.21 0.00 0.00 191900.22 4951.61 237677.23 00:16:52.453 13:45:55 -- target/shutdown.sh@94 -- # stoptarget 00:16:52.453 13:45:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:52.453 13:45:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:52.453 13:45:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:52.453 13:45:55 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:52.453 13:45:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:52.453 13:45:55 -- nvmf/common.sh@117 -- # sync 00:16:52.453 13:45:55 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:52.453 13:45:55 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:52.453 13:45:55 -- nvmf/common.sh@120 -- # set +e 00:16:52.453 13:45:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:52.453 13:45:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:52.453 rmmod nvme_rdma 00:16:52.453 rmmod nvme_fabrics 00:16:52.453 13:45:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:52.453 13:45:55 -- nvmf/common.sh@124 -- # set -e 00:16:52.453 13:45:55 -- nvmf/common.sh@125 -- # return 0 00:16:52.453 13:45:55 -- nvmf/common.sh@478 -- # '[' -n 1158043 ']' 00:16:52.453 13:45:55 -- nvmf/common.sh@479 -- # killprocess 1158043 00:16:52.453 13:45:55 -- common/autotest_common.sh@936 -- # '[' -z 1158043 ']' 00:16:52.453 13:45:55 -- common/autotest_common.sh@940 -- # kill -0 1158043 00:16:52.453 13:45:55 -- common/autotest_common.sh@941 -- # uname 00:16:52.453 13:45:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.453 13:45:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1158043 00:16:52.453 13:45:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:52.453 13:45:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:52.453 13:45:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1158043' 00:16:52.453 killing process with pid 1158043 00:16:52.453 13:45:55 -- common/autotest_common.sh@955 -- # kill 1158043 00:16:52.453 13:45:55 -- common/autotest_common.sh@960 -- # wait 1158043 00:16:53.386 13:45:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:53.386 13:45:55 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:53.386 00:16:53.386 real 0m9.627s 00:16:53.386 user 0m30.113s 00:16:53.386 sys 0m3.204s 00:16:53.386 13:45:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:53.386 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:16:53.386 ************************************ 00:16:53.386 END TEST nvmf_shutdown_tc1 00:16:53.386 ************************************ 00:16:53.386 13:45:55 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:53.386 13:45:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 
']' 00:16:53.386 13:45:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:53.386 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:16:53.386 ************************************ 00:16:53.386 START TEST nvmf_shutdown_tc2 00:16:53.386 ************************************ 00:16:53.386 13:45:56 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:16:53.386 13:45:56 -- target/shutdown.sh@99 -- # starttarget 00:16:53.386 13:45:56 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:53.386 13:45:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:53.386 13:45:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.386 13:45:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:53.386 13:45:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:53.386 13:45:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:53.386 13:45:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.386 13:45:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.386 13:45:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.386 13:45:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:53.386 13:45:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.386 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.386 13:45:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:53.386 13:45:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.386 13:45:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.386 13:45:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.386 13:45:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.386 13:45:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.386 13:45:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.386 13:45:56 -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.386 13:45:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.386 13:45:56 -- nvmf/common.sh@296 -- # e810=() 00:16:53.386 13:45:56 -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.386 13:45:56 -- nvmf/common.sh@297 -- # x722=() 00:16:53.386 13:45:56 -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.386 13:45:56 -- nvmf/common.sh@298 -- # mlx=() 00:16:53.386 13:45:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.386 13:45:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.386 13:45:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.386 13:45:56 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:53.386 13:45:56 -- 
nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:53.386 13:45:56 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:53.386 13:45:56 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:53.386 13:45:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.386 13:45:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:16:53.386 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:16:53.386 13:45:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:53.386 13:45:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:16:53.386 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:16:53.386 13:45:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:53.386 13:45:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.386 13:45:56 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.386 13:45:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:53.386 13:45:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.386 13:45:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:16:53.386 Found net devices under 0000:81:00.0: mlx_0_0 00:16:53.386 13:45:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.386 13:45:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.386 13:45:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:53.386 13:45:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.386 13:45:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:16:53.386 Found net devices under 0000:81:00.1: mlx_0_1 00:16:53.386 13:45:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.386 13:45:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:53.386 13:45:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:53.386 13:45:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:53.386 13:45:56 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:53.386 13:45:56 -- nvmf/common.sh@58 -- # uname 00:16:53.386 13:45:56 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:53.386 
13:45:56 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:53.386 13:45:56 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:53.386 13:45:56 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:53.386 13:45:56 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:53.386 13:45:56 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:53.386 13:45:56 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:53.386 13:45:56 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:53.386 13:45:56 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:53.386 13:45:56 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:53.386 13:45:56 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:53.386 13:45:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:53.386 13:45:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:53.386 13:45:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:53.386 13:45:56 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:53.386 13:45:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:53.386 13:45:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:53.386 13:45:56 -- nvmf/common.sh@105 -- # continue 2 00:16:53.386 13:45:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.386 13:45:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:53.386 13:45:56 -- nvmf/common.sh@105 -- # continue 2 00:16:53.386 13:45:56 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:53.386 13:45:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:53.386 13:45:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:53.386 13:45:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:53.386 13:45:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:53.386 13:45:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:53.386 13:45:56 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:53.386 13:45:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:53.386 13:45:56 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:53.386 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:53.386 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:16:53.386 altname enp129s0f0np0 00:16:53.386 inet 192.168.100.8/24 scope global mlx_0_0 00:16:53.386 valid_lft forever preferred_lft forever 00:16:53.386 13:45:56 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:53.386 13:45:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:53.386 13:45:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:53.386 13:45:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:53.386 13:45:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:53.386 13:45:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:53.386 13:45:56 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:53.386 13:45:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:53.387 13:45:56 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:53.387 314: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:53.387 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:16:53.387 altname enp129s0f1np1 00:16:53.387 inet 192.168.100.9/24 scope global mlx_0_1 00:16:53.387 valid_lft forever preferred_lft forever 00:16:53.387 13:45:56 -- nvmf/common.sh@411 -- # return 0 00:16:53.387 13:45:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:53.387 13:45:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:53.387 13:45:56 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:53.387 13:45:56 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:53.387 13:45:56 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:53.387 13:45:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:53.387 13:45:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:53.387 13:45:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:53.387 13:45:56 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:53.387 13:45:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:53.387 13:45:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:53.387 13:45:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.387 13:45:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:53.387 13:45:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:53.387 13:45:56 -- nvmf/common.sh@105 -- # continue 2 00:16:53.387 13:45:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:53.387 13:45:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.387 13:45:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:53.387 13:45:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.387 13:45:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:53.387 13:45:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:53.387 13:45:56 -- nvmf/common.sh@105 -- # continue 2 00:16:53.387 13:45:56 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:53.387 13:45:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:53.387 13:45:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:53.387 13:45:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:53.387 13:45:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:53.387 13:45:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:53.387 13:45:56 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:53.387 13:45:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:53.387 13:45:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:53.387 13:45:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:53.387 13:45:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:53.387 13:45:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:53.387 13:45:56 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:53.387 192.168.100.9' 00:16:53.387 13:45:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:53.387 192.168.100.9' 00:16:53.387 13:45:56 -- nvmf/common.sh@446 -- # head -n 1 00:16:53.387 13:45:56 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:53.387 13:45:56 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:53.387 192.168.100.9' 00:16:53.387 13:45:56 -- nvmf/common.sh@447 -- # tail -n +2 00:16:53.387 13:45:56 -- nvmf/common.sh@447 -- # head -n 1 00:16:53.387 13:45:56 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:53.387 13:45:56 -- nvmf/common.sh@448 -- # '[' -z 
192.168.100.8 ']' 00:16:53.387 13:45:56 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:53.387 13:45:56 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:53.387 13:45:56 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:53.387 13:45:56 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:53.646 13:45:56 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:53.646 13:45:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:53.646 13:45:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:53.646 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.646 13:45:56 -- nvmf/common.sh@470 -- # nvmfpid=1159029 00:16:53.646 13:45:56 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:53.646 13:45:56 -- nvmf/common.sh@471 -- # waitforlisten 1159029 00:16:53.646 13:45:56 -- common/autotest_common.sh@817 -- # '[' -z 1159029 ']' 00:16:53.646 13:45:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.646 13:45:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:53.646 13:45:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.646 13:45:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:53.646 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.646 [2024-04-18 13:45:56.243064] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:53.646 [2024-04-18 13:45:56.243150] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.646 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.646 [2024-04-18 13:45:56.323865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.646 [2024-04-18 13:45:56.448976] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.646 [2024-04-18 13:45:56.449057] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.646 [2024-04-18 13:45:56.449074] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.646 [2024-04-18 13:45:56.449088] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.646 [2024-04-18 13:45:56.449100] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
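Two notes on the target bring-up traced above. The address discovery boils down to taking the first IPv4 address on each RDMA-backed netdev; a condensed sketch of what the traced commands do (illustrative only, not the literal nvmf/common.sh source, and with the mlx_0_0/mlx_0_1 names from this run hard-coded rather than discovered via rxe_cfg):

    # first IPv4 address, prefix length stripped, on each RDMA interface
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # head -n 1 of that list becomes NVMF_FIRST_TARGET_IP (192.168.100.8 here),
    # tail -n +2 | head -n 1 becomes NVMF_SECOND_TARGET_IP (192.168.100.9)

Separately, the -m 0x1E mask handed to nvmf_tgt is binary 11110, i.e. cores 1-4; core 0 is deliberately left free for the bdevperf client started later with -c 0x1. The four reactor start-up lines that follow match that mask.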
00:16:53.646 [2024-04-18 13:45:56.449162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.646 [2024-04-18 13:45:56.449213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.646 [2024-04-18 13:45:56.449267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:53.646 [2024-04-18 13:45:56.449271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.904 13:45:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:53.904 13:45:56 -- common/autotest_common.sh@850 -- # return 0 00:16:53.904 13:45:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:53.904 13:45:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:53.904 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.904 13:45:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.904 13:45:56 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:53.904 13:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.904 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.904 [2024-04-18 13:45:56.645141] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2528350/0x252c840) succeed. 00:16:53.904 [2024-04-18 13:45:56.657398] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2529940/0x256ded0) succeed. 00:16:54.162 13:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.162 13:45:56 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:54.162 13:45:56 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:54.162 13:45:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:54.162 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:16:54.162 13:45:56 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.162 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.162 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.163 13:45:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:54.163 13:45:56 -- target/shutdown.sh@28 -- # cat 00:16:54.163 13:45:56 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:54.163 13:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.163 13:45:56 -- common/autotest_common.sh@10 -- # set +x 
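For context on the create_subsystems loop just traced: each for i / cat iteration appends one subsystem's worth of RPC calls to rpcs.txt, which the closing rpc_cmd then replays against the target. The heredoc itself lives in test/nvmf/target/shutdown.sh and is not echoed into this log, but judging from the Malloc bdevs, the cnode1-10 subsystems and the 192.168.100.8:4420 listener reported below, each block amounts to roughly the following (an educated sketch with illustrative bdev sizes, not the script's literal contents):

    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420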
00:16:54.163 Malloc1 00:16:54.163 [2024-04-18 13:45:56.890519] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:54.163 Malloc2 00:16:54.420 Malloc3 00:16:54.420 Malloc4 00:16:54.420 Malloc5 00:16:54.420 Malloc6 00:16:54.420 Malloc7 00:16:54.685 Malloc8 00:16:54.685 Malloc9 00:16:54.685 Malloc10 00:16:54.685 13:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.685 13:45:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:54.685 13:45:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:54.685 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:16:54.685 13:45:57 -- target/shutdown.sh@103 -- # perfpid=1159212 00:16:54.685 13:45:57 -- target/shutdown.sh@104 -- # waitforlisten 1159212 /var/tmp/bdevperf.sock 00:16:54.685 13:45:57 -- common/autotest_common.sh@817 -- # '[' -z 1159212 ']' 00:16:54.685 13:45:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.685 13:45:57 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:54.685 13:45:57 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:54.685 13:45:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:54.685 13:45:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.685 13:45:57 -- nvmf/common.sh@521 -- # config=() 00:16:54.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.685 13:45:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:54.685 13:45:57 -- nvmf/common.sh@521 -- # local subsystem config 00:16:54.685 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": "bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": "bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": "bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": "bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": "bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": "bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": 
"bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.685 "hdgst": ${hdgst:-false}, 00:16:54.685 "ddgst": ${ddgst:-false} 00:16:54.685 }, 00:16:54.685 "method": "bdev_nvme_attach_controller" 00:16:54.685 } 00:16:54.685 EOF 00:16:54.685 )") 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.685 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.685 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.685 { 00:16:54.685 "params": { 00:16:54.685 "name": "Nvme$subsystem", 00:16:54.685 "trtype": "$TEST_TRANSPORT", 00:16:54.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.685 "adrfam": "ipv4", 00:16:54.685 "trsvcid": "$NVMF_PORT", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.686 "hdgst": ${hdgst:-false}, 00:16:54.686 "ddgst": ${ddgst:-false} 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 } 00:16:54.686 EOF 00:16:54.686 )") 00:16:54.686 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.686 13:45:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.686 13:45:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.686 { 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme$subsystem", 00:16:54.686 "trtype": "$TEST_TRANSPORT", 00:16:54.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "$NVMF_PORT", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.686 "hdgst": ${hdgst:-false}, 00:16:54.686 "ddgst": ${ddgst:-false} 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 } 00:16:54.686 EOF 00:16:54.686 )") 00:16:54.686 13:45:57 -- nvmf/common.sh@543 -- # cat 00:16:54.686 13:45:57 -- nvmf/common.sh@545 -- # jq . 
00:16:54.686 13:45:57 -- nvmf/common.sh@546 -- # IFS=, 00:16:54.686 13:45:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme1", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme2", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme3", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme4", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme5", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme6", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme7", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme8", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 
00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme9", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 },{ 00:16:54.686 "params": { 00:16:54.686 "name": "Nvme10", 00:16:54.686 "trtype": "rdma", 00:16:54.686 "traddr": "192.168.100.8", 00:16:54.686 "adrfam": "ipv4", 00:16:54.686 "trsvcid": "4420", 00:16:54.686 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:54.686 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:54.686 "hdgst": false, 00:16:54.686 "ddgst": false 00:16:54.686 }, 00:16:54.686 "method": "bdev_nvme_attach_controller" 00:16:54.686 }' 00:16:54.686 [2024-04-18 13:45:57.417417] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:54.686 [2024-04-18 13:45:57.417511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159212 ] 00:16:54.686 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.944 [2024-04-18 13:45:57.498710] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.944 [2024-04-18 13:45:57.619303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.875 Running I/O for 10 seconds... 00:16:55.875 13:45:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:55.875 13:45:58 -- common/autotest_common.sh@850 -- # return 0 00:16:55.875 13:45:58 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:55.875 13:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.875 13:45:58 -- common/autotest_common.sh@10 -- # set +x 00:16:56.156 13:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.156 13:45:58 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:56.156 13:45:58 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:56.156 13:45:58 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:56.156 13:45:58 -- target/shutdown.sh@57 -- # local ret=1 00:16:56.156 13:45:58 -- target/shutdown.sh@58 -- # local i 00:16:56.156 13:45:58 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:56.156 13:45:58 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:56.156 13:45:58 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:56.156 13:45:58 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:56.156 13:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.156 13:45:58 -- common/autotest_common.sh@10 -- # set +x 00:16:56.156 13:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.156 13:45:58 -- target/shutdown.sh@60 -- # read_io_count=3 00:16:56.156 13:45:58 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:16:56.156 13:45:58 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:56.412 13:45:59 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:56.412 13:45:59 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:56.412 13:45:59 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:56.412 13:45:59 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:16:56.412 13:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.412 13:45:59 -- common/autotest_common.sh@10 -- # set +x 00:16:56.412 13:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.412 13:45:59 -- target/shutdown.sh@60 -- # read_io_count=92 00:16:56.412 13:45:59 -- target/shutdown.sh@63 -- # '[' 92 -ge 100 ']' 00:16:56.412 13:45:59 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:56.669 13:45:59 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:56.669 13:45:59 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:56.669 13:45:59 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:56.669 13:45:59 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:56.669 13:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.669 13:45:59 -- common/autotest_common.sh@10 -- # set +x 00:16:56.926 13:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.926 13:45:59 -- target/shutdown.sh@60 -- # read_io_count=220 00:16:56.926 13:45:59 -- target/shutdown.sh@63 -- # '[' 220 -ge 100 ']' 00:16:56.926 13:45:59 -- target/shutdown.sh@64 -- # ret=0 00:16:56.926 13:45:59 -- target/shutdown.sh@65 -- # break 00:16:56.926 13:45:59 -- target/shutdown.sh@69 -- # return 0 00:16:56.926 13:45:59 -- target/shutdown.sh@110 -- # killprocess 1159212 00:16:56.926 13:45:59 -- common/autotest_common.sh@936 -- # '[' -z 1159212 ']' 00:16:56.926 13:45:59 -- common/autotest_common.sh@940 -- # kill -0 1159212 00:16:56.926 13:45:59 -- common/autotest_common.sh@941 -- # uname 00:16:56.926 13:45:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.926 13:45:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1159212 00:16:56.926 13:45:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:56.926 13:45:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:56.926 13:45:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1159212' 00:16:56.926 killing process with pid 1159212 00:16:56.926 13:45:59 -- common/autotest_common.sh@955 -- # kill 1159212 00:16:56.926 13:45:59 -- common/autotest_common.sh@960 -- # wait 1159212 00:16:57.183 Received shutdown signal, test time was about 1.279939 seconds 00:16:57.183 00:16:57.183 Latency(us) 00:16:57.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme1n1 : 1.26 274.35 17.15 0.00 0.00 229420.96 12379.02 243891.01 00:16:57.183 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme2n1 : 1.26 273.16 17.07 0.00 0.00 226718.43 12427.57 231463.44 00:16:57.183 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme3n1 : 1.26 285.41 17.84 0.00 0.00 213720.41 5388.52 222142.77 00:16:57.183 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme4n1 : 1.26 303.89 18.99 0.00 0.00 197976.56 8980.86 171655.77 00:16:57.183 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme5n1 : 1.27 290.75 18.17 0.00 0.00 203378.14 11505.21 
198841.08 00:16:57.183 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme6n1 : 1.27 302.79 18.92 0.00 0.00 192651.57 15049.01 154567.87 00:16:57.183 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme7n1 : 1.27 302.19 18.89 0.00 0.00 189748.46 16796.63 139033.41 00:16:57.183 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme8n1 : 1.27 301.56 18.85 0.00 0.00 187205.91 18350.08 122722.23 00:16:57.183 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme9n1 : 1.28 300.93 18.81 0.00 0.00 184503.06 19903.53 135149.80 00:16:57.183 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:57.183 Verification LBA range: start 0x0 length 0x400 00:16:57.183 Nvme10n1 : 1.28 250.25 15.64 0.00 0.00 217828.62 13495.56 254765.13 00:16:57.183 =================================================================================================================== 00:16:57.183 Total : 2885.27 180.33 0.00 0.00 203508.31 5388.52 254765.13 00:16:57.440 13:46:00 -- target/shutdown.sh@113 -- # sleep 1 00:16:58.810 13:46:01 -- target/shutdown.sh@114 -- # kill -0 1159029 00:16:58.810 13:46:01 -- target/shutdown.sh@116 -- # stoptarget 00:16:58.810 13:46:01 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:58.810 13:46:01 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:58.810 13:46:01 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:58.810 13:46:01 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:58.810 13:46:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:58.810 13:46:01 -- nvmf/common.sh@117 -- # sync 00:16:58.810 13:46:01 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:58.810 13:46:01 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:58.810 13:46:01 -- nvmf/common.sh@120 -- # set +e 00:16:58.810 13:46:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.810 13:46:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:58.810 rmmod nvme_rdma 00:16:58.810 rmmod nvme_fabrics 00:16:58.810 13:46:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.810 13:46:01 -- nvmf/common.sh@124 -- # set -e 00:16:58.810 13:46:01 -- nvmf/common.sh@125 -- # return 0 00:16:58.810 13:46:01 -- nvmf/common.sh@478 -- # '[' -n 1159029 ']' 00:16:58.810 13:46:01 -- nvmf/common.sh@479 -- # killprocess 1159029 00:16:58.810 13:46:01 -- common/autotest_common.sh@936 -- # '[' -z 1159029 ']' 00:16:58.810 13:46:01 -- common/autotest_common.sh@940 -- # kill -0 1159029 00:16:58.810 13:46:01 -- common/autotest_common.sh@941 -- # uname 00:16:58.810 13:46:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:58.810 13:46:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1159029 00:16:58.810 13:46:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:58.810 13:46:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:58.810 13:46:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1159029' 00:16:58.810 killing process with pid 1159029 00:16:58.810 13:46:01 -- 
common/autotest_common.sh@955 -- # kill 1159029 00:16:58.810 13:46:01 -- common/autotest_common.sh@960 -- # wait 1159029 00:16:59.375 13:46:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:59.375 13:46:01 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:59.375 00:16:59.375 real 0m5.969s 00:16:59.375 user 0m24.221s 00:16:59.375 sys 0m1.114s 00:16:59.375 13:46:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:59.375 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:59.375 ************************************ 00:16:59.375 END TEST nvmf_shutdown_tc2 00:16:59.375 ************************************ 00:16:59.375 13:46:02 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:59.375 13:46:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:59.375 13:46:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.375 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.375 ************************************ 00:16:59.375 START TEST nvmf_shutdown_tc3 00:16:59.375 ************************************ 00:16:59.375 13:46:02 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:16:59.375 13:46:02 -- target/shutdown.sh@121 -- # starttarget 00:16:59.375 13:46:02 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:59.375 13:46:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:59.375 13:46:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.375 13:46:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:59.375 13:46:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:59.375 13:46:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:59.375 13:46:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.375 13:46:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.375 13:46:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.375 13:46:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:59.375 13:46:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.375 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.375 13:46:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:59.375 13:46:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:59.375 13:46:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:59.375 13:46:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:59.375 13:46:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:59.375 13:46:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:59.375 13:46:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:59.375 13:46:02 -- nvmf/common.sh@295 -- # net_devs=() 00:16:59.375 13:46:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:59.375 13:46:02 -- nvmf/common.sh@296 -- # e810=() 00:16:59.375 13:46:02 -- nvmf/common.sh@296 -- # local -ga e810 00:16:59.375 13:46:02 -- nvmf/common.sh@297 -- # x722=() 00:16:59.375 13:46:02 -- nvmf/common.sh@297 -- # local -ga x722 00:16:59.375 13:46:02 -- nvmf/common.sh@298 -- # mlx=() 00:16:59.375 13:46:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:59.375 13:46:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.375 13:46:02 -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.375 13:46:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:59.375 13:46:02 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:59.375 13:46:02 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:59.375 13:46:02 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:59.375 13:46:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:59.375 13:46:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.375 13:46:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:16:59.375 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:16:59.375 13:46:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:59.375 13:46:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.375 13:46:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:16:59.375 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:16:59.375 13:46:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:59.375 13:46:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:59.375 13:46:02 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.375 13:46:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.375 13:46:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:59.375 13:46:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.375 13:46:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:16:59.375 Found net devices under 0000:81:00.0: mlx_0_0 00:16:59.375 13:46:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.375 13:46:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.375 13:46:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.375 13:46:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:59.375 13:46:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.375 13:46:02 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:81:00.1: mlx_0_1' 00:16:59.375 Found net devices under 0000:81:00.1: mlx_0_1 00:16:59.375 13:46:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.375 13:46:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:59.375 13:46:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:59.375 13:46:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:59.375 13:46:02 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:59.375 13:46:02 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:59.375 13:46:02 -- nvmf/common.sh@58 -- # uname 00:16:59.375 13:46:02 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:59.375 13:46:02 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:59.375 13:46:02 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:59.375 13:46:02 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:59.375 13:46:02 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:59.375 13:46:02 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:59.375 13:46:02 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:59.375 13:46:02 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:59.633 13:46:02 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:59.633 13:46:02 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:59.633 13:46:02 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:59.633 13:46:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:59.633 13:46:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:59.633 13:46:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:59.633 13:46:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:59.633 13:46:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:59.633 13:46:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@105 -- # continue 2 00:16:59.633 13:46:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:59.633 13:46:02 -- nvmf/common.sh@105 -- # continue 2 00:16:59.633 13:46:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:59.633 13:46:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:59.633 13:46:02 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:59.633 13:46:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:59.633 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:59.633 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:16:59.633 
altname enp129s0f0np0 00:16:59.633 inet 192.168.100.8/24 scope global mlx_0_0 00:16:59.633 valid_lft forever preferred_lft forever 00:16:59.633 13:46:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:59.633 13:46:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:59.633 13:46:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:59.633 13:46:02 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:59.633 13:46:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:59.633 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:59.633 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:16:59.633 altname enp129s0f1np1 00:16:59.633 inet 192.168.100.9/24 scope global mlx_0_1 00:16:59.633 valid_lft forever preferred_lft forever 00:16:59.633 13:46:02 -- nvmf/common.sh@411 -- # return 0 00:16:59.633 13:46:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:59.633 13:46:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:59.633 13:46:02 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:59.633 13:46:02 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:59.633 13:46:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:59.633 13:46:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:59.633 13:46:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:59.633 13:46:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:59.633 13:46:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:59.633 13:46:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@105 -- # continue 2 00:16:59.633 13:46:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.633 13:46:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:59.633 13:46:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:59.633 13:46:02 -- nvmf/common.sh@105 -- # continue 2 00:16:59.633 13:46:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:59.633 13:46:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:59.633 13:46:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:59.633 13:46:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:59.633 13:46:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:59.633 13:46:02 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:16:59.633 13:46:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:59.633 13:46:02 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:59.633 192.168.100.9' 00:16:59.633 13:46:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:59.633 192.168.100.9' 00:16:59.633 13:46:02 -- nvmf/common.sh@446 -- # head -n 1 00:16:59.633 13:46:02 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:59.633 13:46:02 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:59.633 192.168.100.9' 00:16:59.633 13:46:02 -- nvmf/common.sh@447 -- # tail -n +2 00:16:59.633 13:46:02 -- nvmf/common.sh@447 -- # head -n 1 00:16:59.633 13:46:02 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:59.633 13:46:02 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:59.633 13:46:02 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:59.633 13:46:02 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:59.633 13:46:02 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:59.633 13:46:02 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:59.633 13:46:02 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:59.633 13:46:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:59.633 13:46:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:59.633 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.633 13:46:02 -- nvmf/common.sh@470 -- # nvmfpid=1159866 00:16:59.633 13:46:02 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:59.633 13:46:02 -- nvmf/common.sh@471 -- # waitforlisten 1159866 00:16:59.633 13:46:02 -- common/autotest_common.sh@817 -- # '[' -z 1159866 ']' 00:16:59.634 13:46:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.634 13:46:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:59.634 13:46:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.634 13:46:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:59.634 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.634 [2024-04-18 13:46:02.332604] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:59.634 [2024-04-18 13:46:02.332699] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.634 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.634 [2024-04-18 13:46:02.422316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.890 [2024-04-18 13:46:02.559656] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.890 [2024-04-18 13:46:02.559717] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.890 [2024-04-18 13:46:02.559733] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.890 [2024-04-18 13:46:02.559746] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:59.890 [2024-04-18 13:46:02.559759] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.890 [2024-04-18 13:46:02.562961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.890 [2024-04-18 13:46:02.563022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.890 [2024-04-18 13:46:02.563074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:59.890 [2024-04-18 13:46:02.563079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.147 13:46:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:00.147 13:46:02 -- common/autotest_common.sh@850 -- # return 0 00:17:00.147 13:46:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:00.147 13:46:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:00.147 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:17:00.147 13:46:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.147 13:46:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:00.147 13:46:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.147 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:17:00.147 [2024-04-18 13:46:02.761489] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x904350/0x908840) succeed. 00:17:00.147 [2024-04-18 13:46:02.773912] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x905940/0x949ed0) succeed. 00:17:00.147 13:46:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.147 13:46:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:00.147 13:46:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:00.147 13:46:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:00.147 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:17:00.147 13:46:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:00.147 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.147 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.147 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.147 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:00.404 13:46:02 -- target/shutdown.sh@28 -- # cat 00:17:00.404 13:46:02 -- target/shutdown.sh@35 -- # rpc_cmd 
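[editor's aside] shutdown.sh@26-28 builds rpcs.txt by appending one block of subsystem-creation RPCs per subsystem (the cat at @28), and the rpc_cmd at @35 then appears to apply the whole batch, so only its effect (Malloc1 through Malloc10 bdevs and an RDMA listener on 192.168.100.8:4420) shows up in the log below. A hand-rolled equivalent for one subsystem, assuming the stock scripts/rpc.py helpers; the malloc size, block size and serial number here are illustrative, since the values used by the test are not echoed:

    i=1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i                    # RAM-backed bdev
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4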
00:17:00.404 13:46:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.404 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:17:00.404 Malloc1 00:17:00.404 [2024-04-18 13:46:03.030395] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:00.404 Malloc2 00:17:00.404 Malloc3 00:17:00.404 Malloc4 00:17:00.661 Malloc5 00:17:00.661 Malloc6 00:17:00.661 Malloc7 00:17:00.661 Malloc8 00:17:00.661 Malloc9 00:17:00.918 Malloc10 00:17:00.918 13:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.918 13:46:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:00.918 13:46:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:00.918 13:46:03 -- common/autotest_common.sh@10 -- # set +x 00:17:00.918 13:46:03 -- target/shutdown.sh@125 -- # perfpid=1160055 00:17:00.918 13:46:03 -- target/shutdown.sh@126 -- # waitforlisten 1160055 /var/tmp/bdevperf.sock 00:17:00.918 13:46:03 -- common/autotest_common.sh@817 -- # '[' -z 1160055 ']' 00:17:00.918 13:46:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.918 13:46:03 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:00.918 13:46:03 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:00.918 13:46:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:00.918 13:46:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.918 13:46:03 -- nvmf/common.sh@521 -- # config=() 00:17:00.918 13:46:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:00.918 13:46:03 -- nvmf/common.sh@521 -- # local subsystem config 00:17:00.918 13:46:03 -- common/autotest_common.sh@10 -- # set +x 00:17:00.918 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.918 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.918 { 00:17:00.918 "params": { 00:17:00.918 "name": "Nvme$subsystem", 00:17:00.918 "trtype": "$TEST_TRANSPORT", 00:17:00.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.918 "adrfam": "ipv4", 00:17:00.918 "trsvcid": "$NVMF_PORT", 00:17:00.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.918 "hdgst": ${hdgst:-false}, 00:17:00.918 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- 
# cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:00.919 { 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme$subsystem", 00:17:00.919 "trtype": "$TEST_TRANSPORT", 00:17:00.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "$NVMF_PORT", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.919 "hdgst": ${hdgst:-false}, 00:17:00.919 "ddgst": ${ddgst:-false} 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 } 00:17:00.919 EOF 00:17:00.919 )") 00:17:00.919 13:46:03 -- nvmf/common.sh@543 -- # cat 00:17:00.919 13:46:03 -- nvmf/common.sh@545 -- # jq . 
00:17:00.919 13:46:03 -- nvmf/common.sh@546 -- # IFS=, 00:17:00.919 13:46:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme1", 00:17:00.919 "trtype": "rdma", 00:17:00.919 "traddr": "192.168.100.8", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "4420", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:00.919 "hdgst": false, 00:17:00.919 "ddgst": false 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 },{ 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme2", 00:17:00.919 "trtype": "rdma", 00:17:00.919 "traddr": "192.168.100.8", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "4420", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:00.919 "hdgst": false, 00:17:00.919 "ddgst": false 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 },{ 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme3", 00:17:00.919 "trtype": "rdma", 00:17:00.919 "traddr": "192.168.100.8", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "4420", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:00.919 "hdgst": false, 00:17:00.919 "ddgst": false 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 },{ 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme4", 00:17:00.919 "trtype": "rdma", 00:17:00.919 "traddr": "192.168.100.8", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "4420", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:00.919 "hdgst": false, 00:17:00.919 "ddgst": false 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 },{ 00:17:00.919 "params": { 00:17:00.919 "name": "Nvme5", 00:17:00.919 "trtype": "rdma", 00:17:00.919 "traddr": "192.168.100.8", 00:17:00.919 "adrfam": "ipv4", 00:17:00.919 "trsvcid": "4420", 00:17:00.919 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:00.919 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:00.919 "hdgst": false, 00:17:00.919 "ddgst": false 00:17:00.919 }, 00:17:00.919 "method": "bdev_nvme_attach_controller" 00:17:00.919 },{ 00:17:00.920 "params": { 00:17:00.920 "name": "Nvme6", 00:17:00.920 "trtype": "rdma", 00:17:00.920 "traddr": "192.168.100.8", 00:17:00.920 "adrfam": "ipv4", 00:17:00.920 "trsvcid": "4420", 00:17:00.920 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:00.920 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:00.920 "hdgst": false, 00:17:00.920 "ddgst": false 00:17:00.920 }, 00:17:00.920 "method": "bdev_nvme_attach_controller" 00:17:00.920 },{ 00:17:00.920 "params": { 00:17:00.920 "name": "Nvme7", 00:17:00.920 "trtype": "rdma", 00:17:00.920 "traddr": "192.168.100.8", 00:17:00.920 "adrfam": "ipv4", 00:17:00.920 "trsvcid": "4420", 00:17:00.920 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:00.920 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:00.920 "hdgst": false, 00:17:00.920 "ddgst": false 00:17:00.920 }, 00:17:00.920 "method": "bdev_nvme_attach_controller" 00:17:00.920 },{ 00:17:00.920 "params": { 00:17:00.920 "name": "Nvme8", 00:17:00.920 "trtype": "rdma", 00:17:00.920 "traddr": "192.168.100.8", 00:17:00.920 "adrfam": "ipv4", 00:17:00.920 "trsvcid": "4420", 00:17:00.920 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:00.920 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:00.920 "hdgst": false, 00:17:00.920 "ddgst": false 00:17:00.920 }, 
00:17:00.920 "method": "bdev_nvme_attach_controller" 00:17:00.920 },{ 00:17:00.920 "params": { 00:17:00.920 "name": "Nvme9", 00:17:00.920 "trtype": "rdma", 00:17:00.920 "traddr": "192.168.100.8", 00:17:00.920 "adrfam": "ipv4", 00:17:00.920 "trsvcid": "4420", 00:17:00.920 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:00.920 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:00.920 "hdgst": false, 00:17:00.920 "ddgst": false 00:17:00.920 }, 00:17:00.920 "method": "bdev_nvme_attach_controller" 00:17:00.920 },{ 00:17:00.920 "params": { 00:17:00.920 "name": "Nvme10", 00:17:00.920 "trtype": "rdma", 00:17:00.920 "traddr": "192.168.100.8", 00:17:00.920 "adrfam": "ipv4", 00:17:00.920 "trsvcid": "4420", 00:17:00.920 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:00.920 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:00.920 "hdgst": false, 00:17:00.920 "ddgst": false 00:17:00.920 }, 00:17:00.920 "method": "bdev_nvme_attach_controller" 00:17:00.920 }' 00:17:00.920 [2024-04-18 13:46:03.575075] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:00.920 [2024-04-18 13:46:03.575173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160055 ] 00:17:00.920 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.920 [2024-04-18 13:46:03.663935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.177 [2024-04-18 13:46:03.784063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.110 Running I/O for 10 seconds... 00:17:02.110 13:46:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:02.110 13:46:04 -- common/autotest_common.sh@850 -- # return 0 00:17:02.110 13:46:04 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:02.110 13:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.110 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 13:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.110 13:46:04 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.110 13:46:04 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:02.110 13:46:04 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:02.110 13:46:04 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:02.110 13:46:04 -- target/shutdown.sh@57 -- # local ret=1 00:17:02.110 13:46:04 -- target/shutdown.sh@58 -- # local i 00:17:02.110 13:46:04 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:02.110 13:46:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:02.110 13:46:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:02.110 13:46:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:02.110 13:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.110 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:17:02.367 13:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.367 13:46:04 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:02.367 13:46:04 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:02.367 13:46:04 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:02.624 13:46:05 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:02.624 13:46:05 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:02.624 13:46:05 -- 
target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:02.624 13:46:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:02.624 13:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.624 13:46:05 -- common/autotest_common.sh@10 -- # set +x 00:17:02.624 13:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.624 13:46:05 -- target/shutdown.sh@60 -- # read_io_count=99 00:17:02.624 13:46:05 -- target/shutdown.sh@63 -- # '[' 99 -ge 100 ']' 00:17:02.624 13:46:05 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:02.881 13:46:05 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:02.881 13:46:05 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:02.881 13:46:05 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:02.881 13:46:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:02.881 13:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.881 13:46:05 -- common/autotest_common.sh@10 -- # set +x 00:17:03.139 13:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.139 13:46:05 -- target/shutdown.sh@60 -- # read_io_count=227 00:17:03.139 13:46:05 -- target/shutdown.sh@63 -- # '[' 227 -ge 100 ']' 00:17:03.139 13:46:05 -- target/shutdown.sh@64 -- # ret=0 00:17:03.139 13:46:05 -- target/shutdown.sh@65 -- # break 00:17:03.139 13:46:05 -- target/shutdown.sh@69 -- # return 0 00:17:03.139 13:46:05 -- target/shutdown.sh@135 -- # killprocess 1159866 00:17:03.139 13:46:05 -- common/autotest_common.sh@936 -- # '[' -z 1159866 ']' 00:17:03.139 13:46:05 -- common/autotest_common.sh@940 -- # kill -0 1159866 00:17:03.139 13:46:05 -- common/autotest_common.sh@941 -- # uname 00:17:03.139 13:46:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.139 13:46:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1159866 00:17:03.139 13:46:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:03.139 13:46:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:03.139 13:46:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1159866' 00:17:03.139 killing process with pid 1159866 00:17:03.139 13:46:05 -- common/autotest_common.sh@955 -- # kill 1159866 00:17:03.139 13:46:05 -- common/autotest_common.sh@960 -- # wait 1159866 00:17:04.090 13:46:06 -- target/shutdown.sh@136 -- # nvmfpid= 00:17:04.090 13:46:06 -- target/shutdown.sh@139 -- # sleep 1 00:17:04.356 [2024-04-18 13:46:06.960514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.960569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.960590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.960605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.960620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.960635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 
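[editor's aside] The polling loop and the target kill traced above are the core of this test case: waitforio asks bdevperf over its RPC socket for Nvme1n1's read count until it crosses 100 (3, then 99, then 227 in this run), and killprocess then terminates the nvmf target (PID 1159866) while bdevperf is still driving I/O, which is what produces the qpair teardown messages that follow. A condensed paraphrase of the two helpers as traced, not the literal shutdown.sh / autotest_common.sh source:

    # Paraphrase of waitforio (target/shutdown.sh@57-69): up to 10 polls, 0.25 s apart.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

    # Paraphrase of killprocess (autotest_common.sh@936-960): check the PID, then terminate and reap.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1
        # the real helper also inspects `ps --no-headers -o comm=` and special-cases sudo-owned processes
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }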
00:17:04.356 [2024-04-18 13:46:06.960650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.960665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.963043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.356 [2024-04-18 13:46:06.963079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:17:04.356 [2024-04-18 13:46:06.963114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.963134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.963151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.963166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.963182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.963205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.963221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.963235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.965873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.356 [2024-04-18 13:46:06.965910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:17:04.356 [2024-04-18 13:46:06.965951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.965974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.966000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.966023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.966039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.966053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.966068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.356 [2024-04-18 13:46:06.966082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.356 [2024-04-18 13:46:06.968341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.968377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:04.357 [2024-04-18 13:46:06.968408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.968428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.968444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.968458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.968473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.968487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.968502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.968515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.970540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.970577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:17:04.357 [2024-04-18 13:46:06.970609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.970630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.970646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.970663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.970679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.970693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.970710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.970725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.973005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.973040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:04.357 [2024-04-18 13:46:06.973072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.973092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.973108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.973122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.973137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.973151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.973166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.973179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.975025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.975052] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:17:04.357 [2024-04-18 13:46:06.975080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.975099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.975115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.975129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.975144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.975160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.975175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.975189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.977042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.977077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:04.357 [2024-04-18 13:46:06.977108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.977128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.977144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.977159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.977180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.977196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.977210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.977225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.979297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.979332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:04.357 [2024-04-18 13:46:06.979363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.979384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.979401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.979415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.979430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.979444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.979459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.979473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.981851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.981886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:17:04.357 [2024-04-18 13:46:06.981917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.981947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.981965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.981980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.981995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.982009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.982024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.357 [2024-04-18 13:46:06.982038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54424 cdw0:0 sqhd:7800 p:0 m:0 dnr:0 00:17:04.357 [2024-04-18 13:46:06.984267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.357 [2024-04-18 13:46:06.984294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
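[editor's aside] All ten controllers (cnode1 through cnode10) end up in the failed state, matching the ten bdev_nvme_attach_controller entries in the bdevperf config printed earlier. For reference, one of those entries corresponds to the plain RPC form below, assuming the stock scripts/rpc.py and that bdevperf is still serving its RPC socket; this is only a restatement of the first config entry, not a command the test itself runs:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1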
00:17:04.357 [2024-04-18 13:46:06.987027] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f00 was disconnected and freed. reset controller. 00:17:04.357 [2024-04-18 13:46:06.987063] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.357 [2024-04-18 13:46:06.989745] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256cc0 was disconnected and freed. reset controller. 00:17:04.357 [2024-04-18 13:46:06.989780] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.357 [2024-04-18 13:46:06.991917] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a80 was disconnected and freed. reset controller. 00:17:04.357 [2024-04-18 13:46:06.991960] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.357 [2024-04-18 13:46:06.994002] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256840 was disconnected and freed. reset controller. 00:17:04.357 [2024-04-18 13:46:06.994029] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.357 [2024-04-18 13:46:06.994113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182100 00:17:04.358 [2024-04-18 13:46:06.994137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182100 00:17:04.358 [2024-04-18 13:46:06.994185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019df0000 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fd80 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6fc00 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4fb00 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef800 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cdf780 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.994916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.994989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf680 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019caf600 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995133] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c7f480 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c4f300 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2f200 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c1f180 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182900 00:17:04.358 [2024-04-18 13:46:06.995504] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ff0000 len:0x10000 key:0x182a00 00:17:04.358 [2024-04-18 13:46:06.995543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182a00 00:17:04.358 [2024-04-18 13:46:06.995584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.358 [2024-04-18 13:46:06.995606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fcff00 len:0x10000 key:0x182a00 00:17:04.358 [2024-04-18 13:46:06.995624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fbfe80 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f8fd00 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f7fc80 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995869] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f4fb00 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.995960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.995983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eff880 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996239] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ecf700 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eaf600 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:06.996403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019aafc00 len:0x10000 key:0x182100 00:17:04.359 [2024-04-18 13:46:06.996442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f17d000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f15c000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996622] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f13b000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f11a000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0f9000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.996768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0d8000 len:0x10000 key:0x182700 00:17:04.359 [2024-04-18 13:46:06.996790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32735 cdw0:98629410 sqhd:df24 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:06.999892] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256600 was disconnected and freed. reset controller. 00:17:04.359 [2024-04-18 13:46:06.999926] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:04.359 [2024-04-18 13:46:06.999964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x182b00 00:17:04.359 [2024-04-18 13:46:06.999984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:07.000013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x182b00 00:17:04.359 [2024-04-18 13:46:07.000033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:07.000055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a03f880 len:0x10000 key:0x182b00 00:17:04.359 [2024-04-18 13:46:07.000073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:07.000095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x182b00 00:17:04.359 [2024-04-18 13:46:07.000113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:07.000135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x182b00 00:17:04.359 [2024-04-18 13:46:07.000152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:07.000174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x182b00 00:17:04.359 [2024-04-18 13:46:07.000192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.359 [2024-04-18 13:46:07.000214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182a00 00:17:04.359 [2024-04-18 13:46:07.000232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7f480 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 
13:46:07.000333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f300 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182a00 00:17:04.360 [2024-04-18 13:46:07.000556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fd00 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fc80 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.000936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.000966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001080] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f980 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26f400 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x182e00 00:17:04.360 [2024-04-18 13:46:07.001654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.360 [2024-04-18 13:46:07.001675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x182e00 00:17:04.361 [2024-04-18 13:46:07.001694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.001716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f200 len:0x10000 key:0x182e00 00:17:04.361 [2024-04-18 13:46:07.001737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.001760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x182e00 00:17:04.361 [2024-04-18 13:46:07.001778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.001800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x182e00 00:17:04.361 [2024-04-18 13:46:07.001818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.001840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.001857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.001879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.001897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.001919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.001946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.001972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.001991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.002031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.002070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.002109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 
len:0x10000 key:0x182c00 00:17:04.361 [2024-04-18 13:46:07.002188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efe00 len:0x10000 key:0x182b00 00:17:04.361 [2024-04-18 13:46:07.002237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x182700 00:17:04.361 [2024-04-18 13:46:07.002276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x182700 00:17:04.361 [2024-04-18 13:46:07.002327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f59d000 len:0x10000 key:0x182700 00:17:04.361 [2024-04-18 13:46:07.002367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f57c000 len:0x10000 key:0x182700 00:17:04.361 [2024-04-18 13:46:07.002408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f55b000 len:0x10000 key:0x182700 00:17:04.361 [2024-04-18 13:46:07.002449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f53a000 len:0x10000 key:0x182700 00:17:04.361 [2024-04-18 13:46:07.002489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f519000 len:0x10000 key:0x182700 00:17:04.361 [2024-04-18 13:46:07.002531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.002553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4f8000 len:0x10000 key:0x182700 00:17:04.361 
[2024-04-18 13:46:07.002571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:3010 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.005850] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192563c0 was disconnected and freed. reset controller. 00:17:04.361 [2024-04-18 13:46:07.005888] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.361 [2024-04-18 13:46:07.005913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183200 00:17:04.361 [2024-04-18 13:46:07.005932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.005983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183200 00:17:04.361 [2024-04-18 13:46:07.006011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.006036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183200 00:17:04.361 [2024-04-18 13:46:07.006055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.006077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183200 00:17:04.361 [2024-04-18 13:46:07.006095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.006117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183500 00:17:04.361 [2024-04-18 13:46:07.006135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.361 [2024-04-18 13:46:07.006157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183500 00:17:04.361 [2024-04-18 13:46:07.006174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 
sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.006958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.006982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.007001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 
00:17:04.362 [2024-04-18 13:46:07.007023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.007040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.007080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.007119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.007158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183500 00:17:04.362 [2024-04-18 13:46:07.007198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x182c00 00:17:04.362 [2024-04-18 13:46:07.007237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d30000 len:0x10000 key:0x182700 00:17:04.362 [2024-04-18 13:46:07.007276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d51000 len:0x10000 key:0x182700 00:17:04.362 [2024-04-18 13:46:07.007318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.362 [2024-04-18 13:46:07.007342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d72000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 
13:46:07.007387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011df6000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e17000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e38000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e59000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011edd000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011efe000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f1f000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012657000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.007971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.007997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012636000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012615000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125f4000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125d3000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125b2000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012591000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012570000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d07d000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d05c000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d03b000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d01a000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44800 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000cff9000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.008574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfd8000 len:0x10000 key:0x182700 00:17:04.363 [2024-04-18 13:46:07.008592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:f680 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.011814] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:17:04.363 [2024-04-18 13:46:07.011849] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.363 [2024-04-18 13:46:07.011874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa8fb00 len:0x10000 key:0x183000 00:17:04.363 [2024-04-18 13:46:07.011893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.011922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa7fa80 len:0x10000 key:0x183000 00:17:04.363 [2024-04-18 13:46:07.011964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.011999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183000 00:17:04.363 [2024-04-18 13:46:07.012018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.012040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183000 00:17:04.363 [2024-04-18 13:46:07.012058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.363 [2024-04-18 13:46:07.012087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183000 00:17:04.364 [2024-04-18 13:46:07.012106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183000 00:17:04.364 [2024-04-18 13:46:07.012146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183000 00:17:04.364 [2024-04-18 13:46:07.012184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183000 00:17:04.364 [2024-04-18 13:46:07.012224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183000 00:17:04.364 [2024-04-18 13:46:07.012270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183500 00:17:04.364 [2024-04-18 13:46:07.012309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x183500 00:17:04.364 [2024-04-18 13:46:07.012348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183500 00:17:04.364 [2024-04-18 13:46:07.012387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183500 00:17:04.364 [2024-04-18 13:46:07.012427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012907] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.012968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.012991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.364 [2024-04-18 13:46:07.013513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183900 00:17:04.364 [2024-04-18 13:46:07.013530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183900 00:17:04.365 [2024-04-18 13:46:07.013569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183900 00:17:04.365 [2024-04-18 13:46:07.013608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183900 00:17:04.365 [2024-04-18 13:46:07.013648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 
sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183900 00:17:04.365 [2024-04-18 13:46:07.013688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.013736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.013774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.013813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.013851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.013890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.013952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.013977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.013995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.014035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 
00:17:04.365 [2024-04-18 13:46:07.014057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.014075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.014114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.014154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.014192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.014232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183b00 00:17:04.365 [2024-04-18 13:46:07.014271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183000 00:17:04.365 [2024-04-18 13:46:07.014310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x182700 00:17:04.365 [2024-04-18 13:46:07.014349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x182700 00:17:04.365 [2024-04-18 13:46:07.014396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 
13:46:07.014420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fddd000 len:0x10000 key:0x182700 00:17:04.365 [2024-04-18 13:46:07.014437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdbc000 len:0x10000 key:0x182700 00:17:04.365 [2024-04-18 13:46:07.014479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.014513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd9b000 len:0x10000 key:0x182700 00:17:04.365 [2024-04-18 13:46:07.014530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:4130 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.017768] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:17:04.365 [2024-04-18 13:46:07.017802] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.365 [2024-04-18 13:46:07.017827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183100 00:17:04.365 [2024-04-18 13:46:07.017846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.017875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183100 00:17:04.365 [2024-04-18 13:46:07.017895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.017918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183100 00:17:04.365 [2024-04-18 13:46:07.017935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.017971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183100 00:17:04.365 [2024-04-18 13:46:07.017994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.018016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183100 00:17:04.365 [2024-04-18 13:46:07.018034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.365 [2024-04-18 13:46:07.018056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183100 00:17:04.365 [2024-04-18 13:46:07.018073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b10f900 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 
key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.018970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.018992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.019011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.019056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183100 00:17:04.366 [2024-04-18 13:46:07.019097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183a00 00:17:04.366 
[2024-04-18 13:46:07.019177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183a00 00:17:04.366 [2024-04-18 13:46:07.019451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.366 [2024-04-18 13:46:07.019472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.019975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.019993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020266] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183a00 00:17:04.367 [2024-04-18 13:46:07.020344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.020387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.020410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183b00 00:17:04.367 [2024-04-18 13:46:07.020427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:6010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023332] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:17:04.367 [2024-04-18 13:46:07.023368] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:04.367 [2024-04-18 13:46:07.023393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 
13:46:07.023782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.367 [2024-04-18 13:46:07.023864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183600 00:17:04.367 [2024-04-18 13:46:07.023881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.023903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183600 00:17:04.368 [2024-04-18 13:46:07.023920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.023952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.023972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.023994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024518] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.024924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.024991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.025037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.025076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.025116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.025155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.025194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183800 00:17:04.368 [2024-04-18 13:46:07.025240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x183f00 00:17:04.368 [2024-04-18 13:46:07.025279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x183f00 00:17:04.368 [2024-04-18 13:46:07.025318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x183f00 00:17:04.368 [2024-04-18 13:46:07.025357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.368 [2024-04-18 13:46:07.025379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 
len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.025964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.025998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x183f00 00:17:04.369 [2024-04-18 13:46:07.026016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.026037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183600 
00:17:04.369 [2024-04-18 13:46:07.026055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:17:04.369 [2024-04-18 13:46:07.046589] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806780 was disconnected and freed. reset controller. 00:17:04.369 [2024-04-18 13:46:07.046617] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046700] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046734] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046755] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046774] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046792] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046811] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046829] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046847] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046866] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:04.369 [2024-04-18 13:46:07.046884] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
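The block above is bdevperf printing every in-flight WRITE that completed with ABORTED - SQ DELETION (status code type 00, status code 08) while its RDMA qpairs were torn down for the controller reset; the NOTICE lines that follow confirm the qpair was disconnected and freed and that further failover attempts were skipped because one was already in progress. When triaging a run like this, the dump collapses well with a couple of greps (a minimal sketch; build.log stands in for wherever the console output was saved):

    # how many in-flight I/Os were aborted by the submission queue deletion
    grep -c 'ABORTED - SQ DELETION' build.log

    # which qpairs were torn down during the reset
    grep -o 'qpair 0x[0-9a-f]* was disconnected and freed' build.log | sort | uniq -c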
00:17:04.369 [2024-04-18 13:46:07.053319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:04.369 [2024-04-18 13:46:07.053358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:17:04.369 [2024-04-18 13:46:07.053378] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:04.369 [2024-04-18 13:46:07.053395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:17:04.369 task offset: 45056 on job bdev=Nvme1n1 fails 00:17:04.369 00:17:04.369 Latency(us) 00:17:04.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.369 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme1n1 ended in about 2.35 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme1n1 : 2.35 136.17 8.51 27.23 0.00 388376.15 41554.68 1062557.01 00:17:04.369 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme2n1 ended in about 2.35 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme2n1 : 2.35 136.95 8.56 27.22 0.00 382987.05 7330.32 1062557.01 00:17:04.369 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme3n1 ended in about 2.35 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme3n1 : 2.35 136.04 8.50 27.21 0.00 381857.44 55535.69 1062557.01 00:17:04.369 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme4n1 ended in about 2.35 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme4n1 : 2.35 149.57 9.35 27.19 0.00 349590.86 7767.23 1056343.23 00:17:04.369 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme5n1 ended in about 2.30 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme5n1 : 2.30 139.14 8.70 27.83 0.00 367839.51 14660.65 1149549.99 00:17:04.369 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme6n1 ended in about 2.31 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme6n1 : 2.31 138.79 8.67 27.76 0.00 365556.12 20000.62 1137122.42 00:17:04.369 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme7n1 ended in about 2.31 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme7n1 : 2.31 138.43 8.65 27.69 0.00 363360.96 22524.97 1124694.85 00:17:04.369 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme8n1 ended in about 2.32 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme8n1 : 2.32 138.08 8.63 27.62 0.00 361071.25 33010.73 1118481.07 00:17:04.369 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme9n1 ended in about 2.32 seconds with error 00:17:04.369 Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme9n1 : 2.32 137.73 8.61 27.55 0.00 358805.55 67574.90 1106053.50 00:17:04.369 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.369 Job: Nvme10n1 ended in about 2.33 seconds with error 00:17:04.369 
Verification LBA range: start 0x0 length 0x400 00:17:04.369 Nvme10n1 : 2.33 109.92 6.87 27.48 0.00 427635.75 67574.90 1087412.15 00:17:04.369 =================================================================================================================== 00:17:04.369 Total : 1360.83 85.05 274.77 0.00 373612.38 7330.32 1149549.99 00:17:04.369 [2024-04-18 13:46:07.081614] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:04.369 [2024-04-18 13:46:07.083093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:04.369 [2024-04-18 13:46:07.083131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:17:04.369 [2024-04-18 13:46:07.083153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:04.370 [2024-04-18 13:46:07.083170] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:04.370 [2024-04-18 13:46:07.083188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:17:04.370 [2024-04-18 13:46:07.083206] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:17:04.370 [2024-04-18 13:46:07.098052] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.098089] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.098104] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:04.370 [2024-04-18 13:46:07.098214] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.098253] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.098266] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5380 00:17:04.370 [2024-04-18 13:46:07.098369] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.098409] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.098421] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba540 00:17:04.370 [2024-04-18 13:46:07.098525] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.098564] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.098576] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c00 00:17:04.370 [2024-04-18 13:46:07.098703] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.098728] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 
13:46:07.098741] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c140 00:17:04.370 [2024-04-18 13:46:07.098843] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.098866] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.098879] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b54c0 00:17:04.370 [2024-04-18 13:46:07.098971] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.098994] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.099006] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f280 00:17:04.370 [2024-04-18 13:46:07.099111] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.099134] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.099147] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019298cc0 00:17:04.370 [2024-04-18 13:46:07.099222] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.099245] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.099258] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd440 00:17:04.370 [2024-04-18 13:46:07.099358] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:04.370 [2024-04-18 13:46:07.099381] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:04.370 [2024-04-18 13:46:07.099393] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c60c0 00:17:04.935 13:46:07 -- target/shutdown.sh@142 -- # kill -9 1160055 00:17:04.935 13:46:07 -- target/shutdown.sh@144 -- # stoptarget 00:17:04.935 13:46:07 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:04.935 13:46:07 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:04.935 13:46:07 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:04.935 13:46:07 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:04.935 13:46:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:04.935 13:46:07 -- nvmf/common.sh@117 -- # sync 00:17:04.935 13:46:07 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:04.935 13:46:07 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:04.935 13:46:07 -- nvmf/common.sh@120 -- # set +e 00:17:04.935 13:46:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.935 13:46:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:04.935 rmmod nvme_rdma 00:17:04.935 rmmod nvme_fabrics 
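At this point the tc3 case has force-killed the bdevperf job and nvmftestfini is unloading the host-side NVMe/RDMA modules; the xtrace shows set +e, a for i in {1..20} retry loop around modprobe -v -r nvme-rdma, then modprobe -v -r nvme-fabrics and set -e. Reconstructed from that trace (the break-on-success is an assumption, the loop body itself is not shown verbatim), the cleanup amounts to roughly:

    # unload nvme-rdma, retrying because references can linger briefly after the target dies
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
    done
    modprobe -v -r nvme-fabrics
    set -e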
00:17:04.935 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 1160055 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:17:04.935 13:46:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.935 13:46:07 -- nvmf/common.sh@124 -- # set -e 00:17:04.935 13:46:07 -- nvmf/common.sh@125 -- # return 0 00:17:04.935 13:46:07 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:04.935 13:46:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:04.935 13:46:07 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:04.935 00:17:04.935 real 0m5.468s 00:17:04.935 user 0m19.023s 00:17:04.935 sys 0m1.330s 00:17:04.935 13:46:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.935 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.935 ************************************ 00:17:04.935 END TEST nvmf_shutdown_tc3 00:17:04.935 ************************************ 00:17:04.935 13:46:07 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:17:04.935 00:17:04.935 real 0m21.580s 00:17:04.935 user 1m13.538s 00:17:04.935 sys 0m5.963s 00:17:04.935 13:46:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.935 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.935 ************************************ 00:17:04.935 END TEST nvmf_shutdown 00:17:04.935 ************************************ 00:17:04.935 13:46:07 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:17:04.935 13:46:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:04.935 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.935 13:46:07 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:17:04.935 13:46:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:04.935 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.935 13:46:07 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:17:04.935 13:46:07 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:04.935 13:46:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:04.935 13:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.935 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.193 ************************************ 00:17:05.193 START TEST nvmf_multicontroller 00:17:05.193 ************************************ 00:17:05.193 13:46:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:05.193 * Looking for test storage... 
00:17:05.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:05.193 13:46:07 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.193 13:46:07 -- nvmf/common.sh@7 -- # uname -s 00:17:05.193 13:46:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.193 13:46:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.193 13:46:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.193 13:46:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.193 13:46:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.193 13:46:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.193 13:46:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.193 13:46:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.193 13:46:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.193 13:46:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.193 13:46:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:05.193 13:46:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:05.193 13:46:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.193 13:46:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.193 13:46:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.193 13:46:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.193 13:46:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:05.193 13:46:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.193 13:46:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.193 13:46:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.193 13:46:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.193 13:46:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.193 13:46:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.193 13:46:07 -- paths/export.sh@5 -- # export PATH 00:17:05.194 13:46:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.194 13:46:07 -- nvmf/common.sh@47 -- # : 0 00:17:05.194 13:46:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.194 13:46:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.194 13:46:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.194 13:46:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.194 13:46:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.194 13:46:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.194 13:46:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.194 13:46:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.194 13:46:07 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.194 13:46:07 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.194 13:46:07 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:05.194 13:46:07 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:05.194 13:46:07 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.194 13:46:07 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:17:05.194 13:46:07 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:17:05.194 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
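multicontroller.sh bails out before doing any work on RDMA because the kernel rdma stack cannot put the same IP on both the host and target side, which the two-controller setup needs. The guard traced here (together with the exit 0 that follows) amounts to the sketch below; the $TEST_TRANSPORT name is an assumption, the xtrace only shows the already-expanded value rdma:

    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi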
00:17:05.194 13:46:07 -- host/multicontroller.sh@20 -- # exit 0 00:17:05.194 00:17:05.194 real 0m0.087s 00:17:05.194 user 0m0.048s 00:17:05.194 sys 0m0.044s 00:17:05.194 13:46:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:05.194 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.194 ************************************ 00:17:05.194 END TEST nvmf_multicontroller 00:17:05.194 ************************************ 00:17:05.194 13:46:07 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:05.194 13:46:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:05.194 13:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.194 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 ************************************ 00:17:05.452 START TEST nvmf_aer 00:17:05.452 ************************************ 00:17:05.452 13:46:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:05.452 * Looking for test storage... 00:17:05.452 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:05.452 13:46:08 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.452 13:46:08 -- nvmf/common.sh@7 -- # uname -s 00:17:05.452 13:46:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.452 13:46:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.452 13:46:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.452 13:46:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.452 13:46:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.452 13:46:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.452 13:46:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.452 13:46:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.452 13:46:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.452 13:46:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.452 13:46:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:05.452 13:46:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:05.452 13:46:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.452 13:46:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.452 13:46:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.452 13:46:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.452 13:46:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:05.452 13:46:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.452 13:46:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.452 13:46:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.452 13:46:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.452 13:46:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.452 13:46:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.452 13:46:08 -- paths/export.sh@5 -- # export PATH 00:17:05.452 13:46:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.452 13:46:08 -- nvmf/common.sh@47 -- # : 0 00:17:05.452 13:46:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.452 13:46:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.452 13:46:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.452 13:46:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.452 13:46:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.452 13:46:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.452 13:46:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.452 13:46:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.452 13:46:08 -- host/aer.sh@11 -- # nvmftestinit 00:17:05.452 13:46:08 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:05.452 13:46:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.452 13:46:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:05.453 13:46:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:05.453 13:46:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:05.453 13:46:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.453 13:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.453 13:46:08 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.453 13:46:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:05.453 13:46:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:05.453 13:46:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:05.453 13:46:08 -- common/autotest_common.sh@10 -- # set +x 00:17:07.981 13:46:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:07.981 13:46:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.981 13:46:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.981 13:46:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.981 13:46:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.981 13:46:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.981 13:46:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.981 13:46:10 -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.981 13:46:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.981 13:46:10 -- nvmf/common.sh@296 -- # e810=() 00:17:07.981 13:46:10 -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.981 13:46:10 -- nvmf/common.sh@297 -- # x722=() 00:17:07.981 13:46:10 -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.981 13:46:10 -- nvmf/common.sh@298 -- # mlx=() 00:17:07.981 13:46:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.981 13:46:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.981 13:46:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.981 13:46:10 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:07.981 13:46:10 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:07.982 13:46:10 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:07.982 13:46:10 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:07.982 13:46:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.982 13:46:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:07.982 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:07.982 13:46:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.982 13:46:10 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:07.982 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:07.982 13:46:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.982 13:46:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:07.982 13:46:10 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.982 13:46:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.982 13:46:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.982 13:46:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:07.982 Found net devices under 0000:81:00.0: mlx_0_0 00:17:07.982 13:46:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.982 13:46:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.982 13:46:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.982 13:46:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.982 13:46:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:07.982 Found net devices under 0000:81:00.1: mlx_0_1 00:17:07.982 13:46:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.982 13:46:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:07.982 13:46:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:07.982 13:46:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:07.982 13:46:10 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:07.982 13:46:10 -- nvmf/common.sh@58 -- # uname 00:17:07.982 13:46:10 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:07.982 13:46:10 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:07.982 13:46:10 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:07.982 13:46:10 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:07.982 13:46:10 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:07.982 13:46:10 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:07.982 13:46:10 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:07.982 13:46:10 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:07.982 13:46:10 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:07.982 13:46:10 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:07.982 13:46:10 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:07.982 13:46:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:07.982 13:46:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:07.982 13:46:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:07.982 13:46:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:07.982 13:46:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:17:07.982 13:46:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:07.982 13:46:10 -- nvmf/common.sh@105 -- # continue 2 00:17:07.982 13:46:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.982 13:46:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:07.982 13:46:10 -- nvmf/common.sh@105 -- # continue 2 00:17:07.982 13:46:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:07.982 13:46:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:07.982 13:46:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:07.982 13:46:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:07.982 13:46:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:07.982 13:46:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:07.982 13:46:10 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:07.982 13:46:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:07.982 13:46:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:07.982 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:07.982 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:07.982 altname enp129s0f0np0 00:17:07.982 inet 192.168.100.8/24 scope global mlx_0_0 00:17:07.982 valid_lft forever preferred_lft forever 00:17:07.982 13:46:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:07.982 13:46:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:07.982 13:46:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:07.982 13:46:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:07.982 13:46:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:07.982 13:46:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:08.240 13:46:10 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:08.240 13:46:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:08.240 13:46:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:08.240 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:08.240 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:08.240 altname enp129s0f1np1 00:17:08.240 inet 192.168.100.9/24 scope global mlx_0_1 00:17:08.240 valid_lft forever preferred_lft forever 00:17:08.240 13:46:10 -- nvmf/common.sh@411 -- # return 0 00:17:08.240 13:46:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:08.240 13:46:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:08.240 13:46:10 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:08.240 13:46:10 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:08.240 13:46:10 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:08.240 13:46:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:08.240 13:46:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:08.240 13:46:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:08.240 13:46:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:08.240 13:46:10 -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:08.240 13:46:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:08.240 13:46:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.240 13:46:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:08.240 13:46:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:08.240 13:46:10 -- nvmf/common.sh@105 -- # continue 2 00:17:08.240 13:46:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:08.240 13:46:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.240 13:46:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:08.240 13:46:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.240 13:46:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:08.240 13:46:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:08.240 13:46:10 -- nvmf/common.sh@105 -- # continue 2 00:17:08.240 13:46:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:08.240 13:46:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:08.240 13:46:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:08.240 13:46:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:08.240 13:46:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:08.240 13:46:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:08.240 13:46:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:08.240 13:46:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:08.240 13:46:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:08.240 13:46:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:08.240 13:46:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:08.240 13:46:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:08.240 13:46:10 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:08.240 192.168.100.9' 00:17:08.240 13:46:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:08.240 192.168.100.9' 00:17:08.240 13:46:10 -- nvmf/common.sh@446 -- # head -n 1 00:17:08.240 13:46:10 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:08.240 13:46:10 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:08.240 192.168.100.9' 00:17:08.240 13:46:10 -- nvmf/common.sh@447 -- # tail -n +2 00:17:08.240 13:46:10 -- nvmf/common.sh@447 -- # head -n 1 00:17:08.240 13:46:10 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:08.240 13:46:10 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:08.240 13:46:10 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:08.240 13:46:10 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:08.240 13:46:10 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:08.240 13:46:10 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:08.240 13:46:10 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:08.240 13:46:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:08.240 13:46:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:08.240 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:17:08.240 13:46:10 -- nvmf/common.sh@470 -- # nvmfpid=1162766 00:17:08.240 13:46:10 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:08.240 13:46:10 -- nvmf/common.sh@471 -- # waitforlisten 1162766 00:17:08.240 13:46:10 -- common/autotest_common.sh@817 -- # '[' -z 1162766 ']' 00:17:08.240 13:46:10 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:08.240 13:46:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:08.240 13:46:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.240 13:46:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:08.240 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:17:08.240 [2024-04-18 13:46:10.911660] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:08.240 [2024-04-18 13:46:10.911764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.240 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.240 [2024-04-18 13:46:10.997595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.498 [2024-04-18 13:46:11.125145] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.498 [2024-04-18 13:46:11.125212] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.498 [2024-04-18 13:46:11.125229] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.498 [2024-04-18 13:46:11.125242] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.498 [2024-04-18 13:46:11.125254] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.498 [2024-04-18 13:46:11.125322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.498 [2024-04-18 13:46:11.125379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.498 [2024-04-18 13:46:11.125431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.498 [2024-04-18 13:46:11.125435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.498 13:46:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:08.498 13:46:11 -- common/autotest_common.sh@850 -- # return 0 00:17:08.498 13:46:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:08.498 13:46:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:08.498 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:08.498 13:46:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.498 13:46:11 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:08.498 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.498 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 [2024-04-18 13:46:11.322005] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb37090/0xb3b580) succeed. 00:17:08.755 [2024-04-18 13:46:11.334457] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb38680/0xb7cc10) succeed. 
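The address discovery traced above reduces to a short pipeline; a minimal sketch, assuming the mlx_0_0 and mlx_0_1 device names matched in the loop above and reusing the exact ip/awk/cut and head/tail commands visible in the trace:

# derive the two RDMA target IPs the way nvmf/common.sh does above
get_ip_address() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run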
00:17:08.755 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.755 13:46:11 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:08.755 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.755 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 Malloc0 00:17:08.755 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.755 13:46:11 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:08.755 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.755 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.755 13:46:11 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.755 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.755 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.755 13:46:11 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:08.755 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.755 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 [2024-04-18 13:46:11.544510] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:08.755 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.755 13:46:11 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:08.755 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.755 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 [2024-04-18 13:46:11.552166] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:08.755 [ 00:17:08.755 { 00:17:08.755 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:08.755 "subtype": "Discovery", 00:17:08.755 "listen_addresses": [], 00:17:08.755 "allow_any_host": true, 00:17:08.755 "hosts": [] 00:17:08.755 }, 00:17:08.755 { 00:17:08.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.755 "subtype": "NVMe", 00:17:08.756 "listen_addresses": [ 00:17:08.756 { 00:17:08.756 "transport": "RDMA", 00:17:08.756 "trtype": "RDMA", 00:17:08.756 "adrfam": "IPv4", 00:17:08.756 "traddr": "192.168.100.8", 00:17:08.756 "trsvcid": "4420" 00:17:08.756 } 00:17:08.756 ], 00:17:08.756 "allow_any_host": true, 00:17:08.756 "hosts": [], 00:17:08.756 "serial_number": "SPDK00000000000001", 00:17:08.756 "model_number": "SPDK bdev Controller", 00:17:08.756 "max_namespaces": 2, 00:17:08.756 "min_cntlid": 1, 00:17:08.756 "max_cntlid": 65519, 00:17:08.756 "namespaces": [ 00:17:08.756 { 00:17:08.756 "nsid": 1, 00:17:08.756 "bdev_name": "Malloc0", 00:17:08.756 "name": "Malloc0", 00:17:08.756 "nguid": "05DF2192EC084616A15362F7B8FD1062", 00:17:08.756 "uuid": "05df2192-ec08-4616-a153-62f7b8fd1062" 00:17:08.756 } 00:17:09.013 ] 00:17:09.013 } 00:17:09.013 ] 00:17:09.013 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.013 13:46:11 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:09.013 13:46:11 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:09.013 13:46:11 -- host/aer.sh@33 -- # aerpid=1162802 00:17:09.013 13:46:11 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:09.013 13:46:11 -- 
host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:09.013 13:46:11 -- common/autotest_common.sh@1251 -- # local i=0 00:17:09.013 13:46:11 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:09.013 13:46:11 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:17:09.013 13:46:11 -- common/autotest_common.sh@1254 -- # i=1 00:17:09.013 13:46:11 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:09.013 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.013 13:46:11 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:09.013 13:46:11 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:17:09.013 13:46:11 -- common/autotest_common.sh@1254 -- # i=2 00:17:09.013 13:46:11 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:09.013 13:46:11 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:09.013 13:46:11 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:09.013 13:46:11 -- common/autotest_common.sh@1262 -- # return 0 00:17:09.013 13:46:11 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:09.013 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.013 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:09.013 Malloc1 00:17:09.013 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.013 13:46:11 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:09.013 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.270 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.270 13:46:11 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:09.270 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.270 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 [ 00:17:09.270 { 00:17:09.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:09.270 "subtype": "Discovery", 00:17:09.270 "listen_addresses": [], 00:17:09.270 "allow_any_host": true, 00:17:09.270 "hosts": [] 00:17:09.270 }, 00:17:09.270 { 00:17:09.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.270 "subtype": "NVMe", 00:17:09.270 "listen_addresses": [ 00:17:09.270 { 00:17:09.270 "transport": "RDMA", 00:17:09.270 "trtype": "RDMA", 00:17:09.270 "adrfam": "IPv4", 00:17:09.270 "traddr": "192.168.100.8", 00:17:09.270 "trsvcid": "4420" 00:17:09.270 } 00:17:09.270 ], 00:17:09.270 "allow_any_host": true, 00:17:09.270 "hosts": [], 00:17:09.270 "serial_number": "SPDK00000000000001", 00:17:09.270 "model_number": "SPDK bdev Controller", 00:17:09.270 "max_namespaces": 2, 00:17:09.270 "min_cntlid": 1, 00:17:09.270 "max_cntlid": 65519, 00:17:09.270 "namespaces": [ 00:17:09.270 { 00:17:09.270 "nsid": 1, 00:17:09.270 "bdev_name": "Malloc0", 00:17:09.270 "name": "Malloc0", 00:17:09.270 "nguid": "05DF2192EC084616A15362F7B8FD1062", 00:17:09.270 "uuid": "05df2192-ec08-4616-a153-62f7b8fd1062" 00:17:09.270 }, 00:17:09.270 { 00:17:09.270 "nsid": 2, 00:17:09.270 "bdev_name": "Malloc1", 00:17:09.270 "name": "Malloc1", 00:17:09.270 "nguid": "945C3F1D85874D35878FE10BFF633DB1", 00:17:09.270 "uuid": "945c3f1d-8587-4d35-878f-e10bff633db1" 00:17:09.270 } 00:17:09.270 ] 00:17:09.270 } 00:17:09.270 ] 00:17:09.270 13:46:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.270 13:46:11 -- host/aer.sh@43 -- # wait 1162802 00:17:09.270 Asynchronous Event Request test 00:17:09.270 Attaching to 192.168.100.8 00:17:09.270 Attached to 192.168.100.8 00:17:09.270 Registering asynchronous event callbacks... 00:17:09.270 Starting namespace attribute notice tests for all controllers... 00:17:09.270 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:09.270 aer_cb - Changed Namespace 00:17:09.270 Cleaning up... 00:17:09.270 13:46:11 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:09.270 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.270 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.270 13:46:11 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:09.270 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.270 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.270 13:46:11 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.270 13:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.270 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:17:09.270 13:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.270 13:46:11 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:09.270 13:46:11 -- host/aer.sh@51 -- # nvmftestfini 00:17:09.270 13:46:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:09.270 13:46:11 -- nvmf/common.sh@117 -- # sync 00:17:09.270 13:46:11 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:09.270 13:46:11 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:09.270 13:46:11 -- nvmf/common.sh@120 -- # set +e 00:17:09.270 13:46:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.270 13:46:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:09.270 rmmod nvme_rdma 00:17:09.270 rmmod nvme_fabrics 00:17:09.270 13:46:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.270 13:46:11 -- nvmf/common.sh@124 -- # set -e 00:17:09.270 13:46:11 -- nvmf/common.sh@125 -- # return 0 00:17:09.270 13:46:11 -- nvmf/common.sh@478 -- # '[' -n 1162766 ']' 00:17:09.270 13:46:11 -- nvmf/common.sh@479 -- # killprocess 1162766 00:17:09.270 13:46:11 -- common/autotest_common.sh@936 -- # '[' -z 1162766 ']' 00:17:09.270 13:46:11 -- common/autotest_common.sh@940 -- # kill -0 1162766 00:17:09.270 13:46:11 -- common/autotest_common.sh@941 -- # uname 00:17:09.270 13:46:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.270 13:46:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1162766 00:17:09.270 13:46:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:09.270 13:46:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:09.270 13:46:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1162766' 00:17:09.270 killing process with pid 1162766 00:17:09.270 13:46:12 -- common/autotest_common.sh@955 -- # kill 1162766 00:17:09.270 [2024-04-18 13:46:12.025336] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:09.270 13:46:12 -- common/autotest_common.sh@960 -- # wait 1162766 00:17:09.906 13:46:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 
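Condensed, the nvmf_aer run above is the following sequence; a sketch built only from the rpc_cmd calls and the aer invocation visible in the trace (rpc_cmd is the harness wrapper that forwards each call to the running nvmf_tgt's RPC socket; paths shortened to be relative to the spdk tree; the touch-file poll stands in for the waitforfile helper):

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd bdev_malloc_create 64 512 --name Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# start the AER listener; it touches the file once it is set up
test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # waitforfile caps this at 200 tries
rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the Changed Namespace AEN
wait   # aer prints "aer_cb - Changed Namespace" and exits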
00:17:09.906 13:46:12 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:09.906 00:17:09.906 real 0m4.384s 00:17:09.906 user 0m5.730s 00:17:09.906 sys 0m2.458s 00:17:09.906 13:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:09.906 13:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:09.906 ************************************ 00:17:09.906 END TEST nvmf_aer 00:17:09.906 ************************************ 00:17:09.906 13:46:12 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:09.906 13:46:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:09.906 13:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.906 13:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:09.906 ************************************ 00:17:09.906 START TEST nvmf_async_init 00:17:09.906 ************************************ 00:17:09.906 13:46:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:09.906 * Looking for test storage... 00:17:09.906 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:09.906 13:46:12 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.906 13:46:12 -- nvmf/common.sh@7 -- # uname -s 00:17:09.906 13:46:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.906 13:46:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.906 13:46:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.906 13:46:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.906 13:46:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.906 13:46:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.906 13:46:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.906 13:46:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.906 13:46:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.906 13:46:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.906 13:46:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:09.906 13:46:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:09.906 13:46:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.906 13:46:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.906 13:46:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.906 13:46:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.906 13:46:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:09.906 13:46:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.906 13:46:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.906 13:46:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.906 13:46:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.906 13:46:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.906 13:46:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.906 13:46:12 -- paths/export.sh@5 -- # export PATH 00:17:09.906 13:46:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.906 13:46:12 -- nvmf/common.sh@47 -- # : 0 00:17:09.906 13:46:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.906 13:46:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.906 13:46:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.906 13:46:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.906 13:46:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.906 13:46:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.906 13:46:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.906 13:46:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.906 13:46:12 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:09.906 13:46:12 -- host/async_init.sh@14 -- # null_block_size=512 00:17:09.906 13:46:12 -- host/async_init.sh@15 -- # null_bdev=null0 00:17:09.906 13:46:12 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:09.906 13:46:12 -- host/async_init.sh@20 -- # uuidgen 00:17:09.906 13:46:12 -- host/async_init.sh@20 -- # tr -d - 00:17:09.906 13:46:12 -- host/async_init.sh@20 -- # nguid=ad51fbf146b64385821ea93887f912f6 00:17:09.906 13:46:12 -- host/async_init.sh@22 -- # nvmftestinit 00:17:09.906 13:46:12 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 
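The nguid used later for the null bdev namespace is just a dash-stripped UUID, as traced at async_init.sh@20 just above:

nguid=$(uuidgen | tr -d -)   # ad51fbf146b64385821ea93887f912f6 in this run
# bdev_get_bdevs later reports the same value in dashed form:
#   "uuid": "ad51fbf1-46b6-4385-821e-a93887f912f6"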
00:17:09.906 13:46:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.906 13:46:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:09.906 13:46:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:09.906 13:46:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:09.906 13:46:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.906 13:46:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.906 13:46:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.906 13:46:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:09.906 13:46:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:09.906 13:46:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.906 13:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:13.189 13:46:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:13.189 13:46:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.189 13:46:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:13.189 13:46:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.189 13:46:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.189 13:46:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.189 13:46:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.189 13:46:15 -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.189 13:46:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.189 13:46:15 -- nvmf/common.sh@296 -- # e810=() 00:17:13.189 13:46:15 -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.189 13:46:15 -- nvmf/common.sh@297 -- # x722=() 00:17:13.189 13:46:15 -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.189 13:46:15 -- nvmf/common.sh@298 -- # mlx=() 00:17:13.189 13:46:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.189 13:46:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.189 13:46:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.189 13:46:15 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:13.189 13:46:15 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:13.189 13:46:15 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:13.189 13:46:15 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:13.189 13:46:15 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:13.189 13:46:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.189 13:46:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.189 13:46:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:13.189 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:13.189 13:46:15 -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:13.189 13:46:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:13.190 13:46:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:13.190 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:13.190 13:46:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:13.190 13:46:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.190 13:46:15 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.190 13:46:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:13.190 13:46:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.190 13:46:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:13.190 Found net devices under 0000:81:00.0: mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.190 13:46:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.190 13:46:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:13.190 13:46:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.190 13:46:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:13.190 Found net devices under 0000:81:00.1: mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.190 13:46:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:13.190 13:46:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:13.190 13:46:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:13.190 13:46:15 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:13.190 13:46:15 -- nvmf/common.sh@58 -- # uname 00:17:13.190 13:46:15 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:13.190 13:46:15 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:13.190 13:46:15 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:13.190 13:46:15 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:13.190 13:46:15 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:13.190 13:46:15 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:13.190 13:46:15 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:13.190 13:46:15 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:13.190 13:46:15 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:13.190 13:46:15 -- nvmf/common.sh@72 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:17:13.190 13:46:15 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:13.190 13:46:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:13.190 13:46:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:13.190 13:46:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:13.190 13:46:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:13.190 13:46:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:13.190 13:46:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@105 -- # continue 2 00:17:13.190 13:46:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@105 -- # continue 2 00:17:13.190 13:46:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:13.190 13:46:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.190 13:46:15 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:13.190 13:46:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:13.190 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:13.190 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:13.190 altname enp129s0f0np0 00:17:13.190 inet 192.168.100.8/24 scope global mlx_0_0 00:17:13.190 valid_lft forever preferred_lft forever 00:17:13.190 13:46:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:13.190 13:46:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.190 13:46:15 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:13.190 13:46:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:13.190 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:13.190 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:13.190 altname enp129s0f1np1 00:17:13.190 inet 192.168.100.9/24 scope global mlx_0_1 00:17:13.190 valid_lft forever preferred_lft forever 00:17:13.190 13:46:15 -- nvmf/common.sh@411 -- # return 0 00:17:13.190 13:46:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:13.190 13:46:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:13.190 13:46:15 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:13.190 13:46:15 -- 
nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:13.190 13:46:15 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:13.190 13:46:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:13.190 13:46:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:13.190 13:46:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:13.190 13:46:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:13.190 13:46:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:13.190 13:46:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@105 -- # continue 2 00:17:13.190 13:46:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.190 13:46:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:13.190 13:46:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@105 -- # continue 2 00:17:13.190 13:46:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:13.190 13:46:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.190 13:46:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:13.190 13:46:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.190 13:46:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.190 13:46:15 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:13.190 192.168.100.9' 00:17:13.190 13:46:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:13.190 192.168.100.9' 00:17:13.190 13:46:15 -- nvmf/common.sh@446 -- # head -n 1 00:17:13.190 13:46:15 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:13.190 13:46:15 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:13.190 192.168.100.9' 00:17:13.190 13:46:15 -- nvmf/common.sh@447 -- # tail -n +2 00:17:13.190 13:46:15 -- nvmf/common.sh@447 -- # head -n 1 00:17:13.190 13:46:15 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:13.190 13:46:15 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:13.190 13:46:15 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:13.190 13:46:15 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:13.190 13:46:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:13.190 13:46:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:13.190 13:46:15 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:13.190 13:46:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:13.190 13:46:15 -- common/autotest_common.sh@710 -- # xtrace_disable 
00:17:13.190 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.190 13:46:15 -- nvmf/common.sh@470 -- # nvmfpid=1164999 00:17:13.190 13:46:15 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:13.190 13:46:15 -- nvmf/common.sh@471 -- # waitforlisten 1164999 00:17:13.190 13:46:15 -- common/autotest_common.sh@817 -- # '[' -z 1164999 ']' 00:17:13.190 13:46:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.190 13:46:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:13.190 13:46:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.190 13:46:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:13.190 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 [2024-04-18 13:46:15.512076] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:13.191 [2024-04-18 13:46:15.512178] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.191 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.191 [2024-04-18 13:46:15.596980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.191 [2024-04-18 13:46:15.717391] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.191 [2024-04-18 13:46:15.717457] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.191 [2024-04-18 13:46:15.717473] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.191 [2024-04-18 13:46:15.717486] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.191 [2024-04-18 13:46:15.717499] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.191 [2024-04-18 13:46:15.717532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.191 13:46:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:13.191 13:46:15 -- common/autotest_common.sh@850 -- # return 0 00:17:13.191 13:46:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:13.191 13:46:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:13.191 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 13:46:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.191 13:46:15 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:13.191 13:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.191 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 [2024-04-18 13:46:15.901661] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1af9f50/0x1afe440) succeed. 00:17:13.191 [2024-04-18 13:46:15.913824] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1afb450/0x1b3fad0) succeed. 
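With the transport up and both IB devices created, the subsystem and host-side setup traced below condenses to the following rpc_cmd sequence (calls copied from the trace; the cntlid values come from the bdev_get_bdevs output that follows):

rpc_cmd bdev_null_create null0 1024 512                    # 1024 blocks of 512 B
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0                          # exposes nvme0n1
rpc_cmd bdev_get_bdevs -b nvme0n1                          # cntlid 1
rpc_cmd bdev_nvme_reset_controller nvme0                   # disconnect, then re-attach
rpc_cmd bdev_get_bdevs -b nvme0n1                          # same uuid, cntlid now 2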
00:17:13.191 13:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.191 13:46:15 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:13.191 13:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.191 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 null0 00:17:13.191 13:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.191 13:46:15 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:13.191 13:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.191 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 13:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.191 13:46:15 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:13.191 13:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.191 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 13:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.191 13:46:15 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ad51fbf146b64385821ea93887f912f6 00:17:13.191 13:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.191 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 13:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:15 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:13.449 13:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 [2024-04-18 13:46:16.000034] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 nvme0n1 00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 [ 00:17:13.449 { 00:17:13.449 "name": "nvme0n1", 00:17:13.449 "aliases": [ 00:17:13.449 "ad51fbf1-46b6-4385-821e-a93887f912f6" 00:17:13.449 ], 00:17:13.449 "product_name": "NVMe disk", 00:17:13.449 "block_size": 512, 00:17:13.449 "num_blocks": 2097152, 00:17:13.449 "uuid": "ad51fbf1-46b6-4385-821e-a93887f912f6", 00:17:13.449 "assigned_rate_limits": { 00:17:13.449 "rw_ios_per_sec": 0, 00:17:13.449 "rw_mbytes_per_sec": 0, 00:17:13.449 "r_mbytes_per_sec": 0, 00:17:13.449 "w_mbytes_per_sec": 0 00:17:13.449 }, 00:17:13.449 "claimed": false, 00:17:13.449 "zoned": false, 00:17:13.449 "supported_io_types": { 00:17:13.449 "read": true, 00:17:13.449 "write": true, 00:17:13.449 "unmap": false, 00:17:13.449 "write_zeroes": true, 00:17:13.449 "flush": true, 00:17:13.449 "reset": true, 00:17:13.449 "compare": true, 00:17:13.449 "compare_and_write": true, 00:17:13.449 "abort": true, 00:17:13.449 "nvme_admin": true, 00:17:13.449 "nvme_io": true 00:17:13.449 }, 00:17:13.449 "memory_domains": [ 00:17:13.449 { 00:17:13.449 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:13.449 "dma_device_type": 0 00:17:13.449 } 00:17:13.449 ], 00:17:13.449 "driver_specific": { 00:17:13.449 "nvme": [ 00:17:13.449 { 00:17:13.449 "trid": { 00:17:13.449 "trtype": "RDMA", 00:17:13.449 "adrfam": "IPv4", 00:17:13.449 "traddr": "192.168.100.8", 00:17:13.449 "trsvcid": "4420", 00:17:13.449 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:13.449 }, 00:17:13.449 "ctrlr_data": { 00:17:13.449 "cntlid": 1, 00:17:13.449 "vendor_id": "0x8086", 00:17:13.449 "model_number": "SPDK bdev Controller", 00:17:13.449 "serial_number": "00000000000000000000", 00:17:13.449 "firmware_revision": "24.05", 00:17:13.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:13.449 "oacs": { 00:17:13.449 "security": 0, 00:17:13.449 "format": 0, 00:17:13.449 "firmware": 0, 00:17:13.449 "ns_manage": 0 00:17:13.449 }, 00:17:13.449 "multi_ctrlr": true, 00:17:13.449 "ana_reporting": false 00:17:13.449 }, 00:17:13.449 "vs": { 00:17:13.449 "nvme_version": "1.3" 00:17:13.449 }, 00:17:13.449 "ns_data": { 00:17:13.449 "id": 1, 00:17:13.449 "can_share": true 00:17:13.449 } 00:17:13.449 } 00:17:13.449 ], 00:17:13.449 "mp_policy": "active_passive" 00:17:13.449 } 00:17:13.449 } 00:17:13.449 ] 00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 [2024-04-18 13:46:16.118093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.449 [2024-04-18 13:46:16.142105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:13.449 [2024-04-18 13:46:16.169204] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 [ 00:17:13.449 { 00:17:13.449 "name": "nvme0n1", 00:17:13.449 "aliases": [ 00:17:13.449 "ad51fbf1-46b6-4385-821e-a93887f912f6" 00:17:13.449 ], 00:17:13.449 "product_name": "NVMe disk", 00:17:13.449 "block_size": 512, 00:17:13.449 "num_blocks": 2097152, 00:17:13.449 "uuid": "ad51fbf1-46b6-4385-821e-a93887f912f6", 00:17:13.449 "assigned_rate_limits": { 00:17:13.449 "rw_ios_per_sec": 0, 00:17:13.449 "rw_mbytes_per_sec": 0, 00:17:13.449 "r_mbytes_per_sec": 0, 00:17:13.449 "w_mbytes_per_sec": 0 00:17:13.449 }, 00:17:13.449 "claimed": false, 00:17:13.449 "zoned": false, 00:17:13.449 "supported_io_types": { 00:17:13.449 "read": true, 00:17:13.449 "write": true, 00:17:13.449 "unmap": false, 00:17:13.449 "write_zeroes": true, 00:17:13.449 "flush": true, 00:17:13.449 "reset": true, 00:17:13.449 "compare": true, 00:17:13.449 "compare_and_write": true, 00:17:13.449 "abort": true, 00:17:13.449 "nvme_admin": true, 00:17:13.449 "nvme_io": true 00:17:13.449 }, 00:17:13.449 "memory_domains": [ 00:17:13.449 { 00:17:13.449 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:13.449 "dma_device_type": 0 00:17:13.449 } 00:17:13.449 ], 00:17:13.449 "driver_specific": { 00:17:13.449 "nvme": [ 00:17:13.449 { 00:17:13.449 "trid": { 00:17:13.449 "trtype": "RDMA", 00:17:13.449 "adrfam": "IPv4", 00:17:13.449 "traddr": "192.168.100.8", 00:17:13.449 "trsvcid": "4420", 00:17:13.449 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:13.449 }, 00:17:13.449 "ctrlr_data": { 00:17:13.449 "cntlid": 2, 00:17:13.449 "vendor_id": "0x8086", 00:17:13.449 "model_number": "SPDK bdev Controller", 00:17:13.449 "serial_number": "00000000000000000000", 00:17:13.449 "firmware_revision": "24.05", 00:17:13.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:13.449 "oacs": { 00:17:13.449 "security": 0, 00:17:13.449 "format": 0, 00:17:13.449 "firmware": 0, 00:17:13.449 "ns_manage": 0 00:17:13.449 }, 00:17:13.449 "multi_ctrlr": true, 00:17:13.449 "ana_reporting": false 00:17:13.449 }, 00:17:13.449 "vs": { 00:17:13.449 "nvme_version": "1.3" 00:17:13.449 }, 00:17:13.449 "ns_data": { 00:17:13.449 "id": 1, 00:17:13.449 "can_share": true 00:17:13.449 } 00:17:13.449 } 00:17:13.449 ], 00:17:13.449 "mp_policy": "active_passive" 00:17:13.449 } 00:17:13.449 } 00:17:13.449 ] 00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@53 -- # mktemp 00:17:13.449 13:46:16 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.1DDCOXHuOL 00:17:13.449 13:46:16 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:13.449 13:46:16 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.1DDCOXHuOL 00:17:13.449 13:46:16 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 
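The remainder of the test, traced below, switches to the PSK-protected listener on port 4421 (the RPC notice flags TLS support as experimental); a sketch of that flow, with redirection of the key into the temp file inferred from its later use with --psk:

key_path=$(mktemp)                                   # /tmp/tmp.1DDCOXHuOL in this run
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"                                    # removed again at the end of the test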
00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 [2024-04-18 13:46:16.236588] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1DDCOXHuOL 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.449 13:46:16 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1DDCOXHuOL 00:17:13.449 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.449 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.706 [2024-04-18 13:46:16.252602] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.706 nvme0n1 00:17:13.706 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.706 13:46:16 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:13.707 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.707 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.707 [ 00:17:13.707 { 00:17:13.707 "name": "nvme0n1", 00:17:13.707 "aliases": [ 00:17:13.707 "ad51fbf1-46b6-4385-821e-a93887f912f6" 00:17:13.707 ], 00:17:13.707 "product_name": "NVMe disk", 00:17:13.707 "block_size": 512, 00:17:13.707 "num_blocks": 2097152, 00:17:13.707 "uuid": "ad51fbf1-46b6-4385-821e-a93887f912f6", 00:17:13.707 "assigned_rate_limits": { 00:17:13.707 "rw_ios_per_sec": 0, 00:17:13.707 "rw_mbytes_per_sec": 0, 00:17:13.707 "r_mbytes_per_sec": 0, 00:17:13.707 "w_mbytes_per_sec": 0 00:17:13.707 }, 00:17:13.707 "claimed": false, 00:17:13.707 "zoned": false, 00:17:13.707 "supported_io_types": { 00:17:13.707 "read": true, 00:17:13.707 "write": true, 00:17:13.707 "unmap": false, 00:17:13.707 "write_zeroes": true, 00:17:13.707 "flush": true, 00:17:13.707 "reset": true, 00:17:13.707 "compare": true, 00:17:13.707 "compare_and_write": true, 00:17:13.707 "abort": true, 00:17:13.707 "nvme_admin": true, 00:17:13.707 "nvme_io": true 00:17:13.707 }, 00:17:13.707 "memory_domains": [ 00:17:13.707 { 00:17:13.707 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:13.707 "dma_device_type": 0 00:17:13.707 } 00:17:13.707 ], 00:17:13.707 "driver_specific": { 00:17:13.707 "nvme": [ 00:17:13.707 { 00:17:13.707 "trid": { 00:17:13.707 "trtype": "RDMA", 00:17:13.707 "adrfam": "IPv4", 00:17:13.707 "traddr": "192.168.100.8", 00:17:13.707 "trsvcid": "4421", 00:17:13.707 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:13.707 }, 00:17:13.707 "ctrlr_data": { 00:17:13.707 "cntlid": 3, 00:17:13.707 "vendor_id": "0x8086", 00:17:13.707 "model_number": "SPDK bdev Controller", 00:17:13.707 "serial_number": "00000000000000000000", 00:17:13.707 "firmware_revision": "24.05", 00:17:13.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:13.707 "oacs": 
{ 00:17:13.707 "security": 0, 00:17:13.707 "format": 0, 00:17:13.707 "firmware": 0, 00:17:13.707 "ns_manage": 0 00:17:13.707 }, 00:17:13.707 "multi_ctrlr": true, 00:17:13.707 "ana_reporting": false 00:17:13.707 }, 00:17:13.707 "vs": { 00:17:13.707 "nvme_version": "1.3" 00:17:13.707 }, 00:17:13.707 "ns_data": { 00:17:13.707 "id": 1, 00:17:13.707 "can_share": true 00:17:13.707 } 00:17:13.707 } 00:17:13.707 ], 00:17:13.707 "mp_policy": "active_passive" 00:17:13.707 } 00:17:13.707 } 00:17:13.707 ] 00:17:13.707 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.707 13:46:16 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.707 13:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.707 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.707 13:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.707 13:46:16 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.1DDCOXHuOL 00:17:13.707 13:46:16 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:13.707 13:46:16 -- host/async_init.sh@78 -- # nvmftestfini 00:17:13.707 13:46:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:13.707 13:46:16 -- nvmf/common.sh@117 -- # sync 00:17:13.707 13:46:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:13.707 13:46:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:13.707 13:46:16 -- nvmf/common.sh@120 -- # set +e 00:17:13.707 13:46:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.707 13:46:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:13.707 rmmod nvme_rdma 00:17:13.707 rmmod nvme_fabrics 00:17:13.707 13:46:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.707 13:46:16 -- nvmf/common.sh@124 -- # set -e 00:17:13.707 13:46:16 -- nvmf/common.sh@125 -- # return 0 00:17:13.707 13:46:16 -- nvmf/common.sh@478 -- # '[' -n 1164999 ']' 00:17:13.707 13:46:16 -- nvmf/common.sh@479 -- # killprocess 1164999 00:17:13.707 13:46:16 -- common/autotest_common.sh@936 -- # '[' -z 1164999 ']' 00:17:13.707 13:46:16 -- common/autotest_common.sh@940 -- # kill -0 1164999 00:17:13.707 13:46:16 -- common/autotest_common.sh@941 -- # uname 00:17:13.707 13:46:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.707 13:46:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1164999 00:17:13.707 13:46:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:13.707 13:46:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:13.707 13:46:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1164999' 00:17:13.707 killing process with pid 1164999 00:17:13.707 13:46:16 -- common/autotest_common.sh@955 -- # kill 1164999 00:17:13.707 13:46:16 -- common/autotest_common.sh@960 -- # wait 1164999 00:17:14.273 13:46:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:14.273 13:46:16 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:14.273 00:17:14.273 real 0m4.245s 00:17:14.273 user 0m2.273s 00:17:14.273 sys 0m2.430s 00:17:14.273 13:46:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.273 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:14.273 ************************************ 00:17:14.273 END TEST nvmf_async_init 00:17:14.273 ************************************ 00:17:14.273 13:46:16 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:14.273 13:46:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:14.273 13:46:16 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.273 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:14.273 ************************************ 00:17:14.273 START TEST dma 00:17:14.273 ************************************ 00:17:14.273 13:46:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:14.273 * Looking for test storage... 00:17:14.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:14.273 13:46:16 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.273 13:46:16 -- nvmf/common.sh@7 -- # uname -s 00:17:14.273 13:46:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.273 13:46:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.273 13:46:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.273 13:46:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.273 13:46:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.273 13:46:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.273 13:46:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.273 13:46:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.273 13:46:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.273 13:46:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.273 13:46:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:14.273 13:46:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:14.273 13:46:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.273 13:46:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.273 13:46:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.273 13:46:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.273 13:46:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:14.273 13:46:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.273 13:46:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.273 13:46:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.273 13:46:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 13:46:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 13:46:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 13:46:16 -- paths/export.sh@5 -- # export PATH 00:17:14.273 13:46:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 13:46:16 -- nvmf/common.sh@47 -- # : 0 00:17:14.273 13:46:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.273 13:46:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.273 13:46:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.273 13:46:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.273 13:46:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.273 13:46:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.273 13:46:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.273 13:46:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.273 13:46:16 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:17:14.273 13:46:16 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:17:14.273 13:46:16 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:17:14.273 13:46:16 -- host/dma.sh@18 -- # subsystem=0 00:17:14.273 13:46:16 -- host/dma.sh@93 -- # nvmftestinit 00:17:14.273 13:46:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:14.273 13:46:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.273 13:46:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:14.273 13:46:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:14.273 13:46:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:14.273 13:46:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.273 13:46:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.273 13:46:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.273 13:46:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:14.273 13:46:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:14.273 13:46:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.273 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:16.799 13:46:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:16.799 13:46:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.799 13:46:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.799 13:46:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.799 13:46:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.799 13:46:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.799 13:46:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.799 13:46:19 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:16.799 13:46:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.799 13:46:19 -- nvmf/common.sh@296 -- # e810=() 00:17:16.799 13:46:19 -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.799 13:46:19 -- nvmf/common.sh@297 -- # x722=() 00:17:16.799 13:46:19 -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.799 13:46:19 -- nvmf/common.sh@298 -- # mlx=() 00:17:16.799 13:46:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.799 13:46:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.799 13:46:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.799 13:46:19 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:16.799 13:46:19 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:16.799 13:46:19 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:16.799 13:46:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.799 13:46:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.799 13:46:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:16.799 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:16.799 13:46:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.799 13:46:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.799 13:46:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:16.799 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:16.799 13:46:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.799 13:46:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.799 13:46:19 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:16.799 13:46:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.799 13:46:19 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.799 13:46:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.799 13:46:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.799 13:46:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:16.799 Found net devices under 0000:81:00.0: mlx_0_0 00:17:16.800 13:46:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.800 13:46:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.800 13:46:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.800 13:46:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.800 13:46:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.800 13:46:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:16.800 Found net devices under 0000:81:00.1: mlx_0_1 00:17:16.800 13:46:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.800 13:46:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:16.800 13:46:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:16.800 13:46:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:16.800 13:46:19 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:16.800 13:46:19 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:16.800 13:46:19 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:16.800 13:46:19 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:16.800 13:46:19 -- nvmf/common.sh@58 -- # uname 00:17:16.800 13:46:19 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:16.800 13:46:19 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:16.800 13:46:19 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:16.800 13:46:19 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:16.800 13:46:19 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:16.800 13:46:19 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:16.800 13:46:19 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:16.800 13:46:19 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:16.800 13:46:19 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:16.800 13:46:19 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:16.800 13:46:19 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:16.800 13:46:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:16.800 13:46:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:16.800 13:46:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:16.800 13:46:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:16.800 13:46:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:16.800 13:46:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:16.800 13:46:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.800 13:46:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:16.800 13:46:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:16.800 13:46:19 -- nvmf/common.sh@105 -- # continue 2 00:17:16.800 13:46:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:16.800 13:46:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.800 13:46:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:16.800 13:46:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.800 13:46:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:16.800 13:46:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:16.800 13:46:19 -- 
nvmf/common.sh@105 -- # continue 2 00:17:16.800 13:46:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:16.800 13:46:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:16.800 13:46:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:16.800 13:46:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:16.800 13:46:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:16.800 13:46:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:16.800 13:46:19 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:16.800 13:46:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:16.800 13:46:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:16.800 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:16.800 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:16.800 altname enp129s0f0np0 00:17:16.800 inet 192.168.100.8/24 scope global mlx_0_0 00:17:16.800 valid_lft forever preferred_lft forever 00:17:16.800 13:46:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:16.800 13:46:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:16.800 13:46:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:16.800 13:46:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:16.800 13:46:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:16.800 13:46:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.058 13:46:19 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:17.058 13:46:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:17.058 13:46:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:17.058 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:17.058 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:17.058 altname enp129s0f1np1 00:17:17.058 inet 192.168.100.9/24 scope global mlx_0_1 00:17:17.058 valid_lft forever preferred_lft forever 00:17:17.058 13:46:19 -- nvmf/common.sh@411 -- # return 0 00:17:17.058 13:46:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:17.058 13:46:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:17.058 13:46:19 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:17.058 13:46:19 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:17.058 13:46:19 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:17.058 13:46:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:17.058 13:46:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:17.058 13:46:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:17.058 13:46:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:17.058 13:46:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:17.058 13:46:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.058 13:46:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.058 13:46:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:17.058 13:46:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:17.058 13:46:19 -- nvmf/common.sh@105 -- # continue 2 00:17:17.058 13:46:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.058 13:46:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.058 13:46:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:17.058 13:46:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.058 13:46:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:17.058 13:46:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 
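The allocate_nic_ips pass above resolves each RDMA netdev's IPv4 address with an ip/awk/cut pipeline (yielding 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1 on this rig). A minimal standalone sketch of that lookup, assuming the same interface names this log reports:

# sketch only: the same extraction nvmf/common.sh's get_ip_address performs
get_rdma_ipv4() {
    local ifc=$1
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
}
get_rdma_ipv4 mlx_0_0    # 192.168.100.8 on this test bed
get_rdma_ipv4 mlx_0_1    # 192.168.100.9 on this test bed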
00:17:17.058 13:46:19 -- nvmf/common.sh@105 -- # continue 2 00:17:17.058 13:46:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:17.058 13:46:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:17.058 13:46:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:17.058 13:46:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:17.058 13:46:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.058 13:46:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.058 13:46:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:17.058 13:46:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:17.058 13:46:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:17.058 13:46:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:17.058 13:46:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.058 13:46:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.058 13:46:19 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:17.058 192.168.100.9' 00:17:17.058 13:46:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:17.058 192.168.100.9' 00:17:17.058 13:46:19 -- nvmf/common.sh@446 -- # head -n 1 00:17:17.058 13:46:19 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:17.058 13:46:19 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:17.058 192.168.100.9' 00:17:17.058 13:46:19 -- nvmf/common.sh@447 -- # tail -n +2 00:17:17.058 13:46:19 -- nvmf/common.sh@447 -- # head -n 1 00:17:17.058 13:46:19 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:17.058 13:46:19 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:17.058 13:46:19 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:17.058 13:46:19 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:17.058 13:46:19 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:17.058 13:46:19 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:17.058 13:46:19 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:17:17.058 13:46:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:17.058 13:46:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:17.058 13:46:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.058 13:46:19 -- nvmf/common.sh@470 -- # nvmfpid=1167116 00:17:17.058 13:46:19 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:17.058 13:46:19 -- nvmf/common.sh@471 -- # waitforlisten 1167116 00:17:17.058 13:46:19 -- common/autotest_common.sh@817 -- # '[' -z 1167116 ']' 00:17:17.058 13:46:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.058 13:46:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.058 13:46:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.058 13:46:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.058 13:46:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.058 [2024-04-18 13:46:19.727449] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:17:17.058 [2024-04-18 13:46:19.727564] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.058 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.058 [2024-04-18 13:46:19.814619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:17.315 [2024-04-18 13:46:19.935231] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.315 [2024-04-18 13:46:19.935301] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.315 [2024-04-18 13:46:19.935318] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.315 [2024-04-18 13:46:19.935332] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.316 [2024-04-18 13:46:19.935344] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.316 [2024-04-18 13:46:19.935436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.316 [2024-04-18 13:46:19.935443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.316 13:46:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:17.316 13:46:20 -- common/autotest_common.sh@850 -- # return 0 00:17:17.316 13:46:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:17.316 13:46:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:17.316 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:17:17.316 13:46:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.316 13:46:20 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:17.316 13:46:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.316 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:17:17.316 [2024-04-18 13:46:20.116017] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2475a30/0x2479f20) succeed. 00:17:17.573 [2024-04-18 13:46:20.128459] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2476f30/0x24bb5b0) succeed. 
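At this point the dma host test has launched the target app on two cores and created the RDMA transport over the default /var/tmp/spdk.sock RPC socket. A hedged sketch of the equivalent manual sequence, assuming the stock SPDK scripts/rpc.py client (rpc_cmd in the log is a thin wrapper around it):

# sketch, not the test script: start nvmf_tgt, wait for its RPC socket,
# then create the RDMA transport exactly as logged above
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024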
00:17:17.573 13:46:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.573 13:46:20 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:17:17.573 13:46:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.573 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:17:17.573 Malloc0 00:17:17.573 13:46:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.573 13:46:20 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:17.573 13:46:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.573 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:17:17.573 13:46:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.573 13:46:20 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:17.573 13:46:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.573 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:17:17.573 13:46:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.573 13:46:20 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:17.573 13:46:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.573 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:17:17.573 [2024-04-18 13:46:20.327298] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:17.573 13:46:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.573 13:46:20 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:17:17.573 13:46:20 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:17:17.573 13:46:20 -- nvmf/common.sh@521 -- # config=() 00:17:17.573 13:46:20 -- nvmf/common.sh@521 -- # local subsystem config 00:17:17.573 13:46:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.573 13:46:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.573 { 00:17:17.573 "params": { 00:17:17.573 "name": "Nvme$subsystem", 00:17:17.573 "trtype": "$TEST_TRANSPORT", 00:17:17.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.573 "adrfam": "ipv4", 00:17:17.573 "trsvcid": "$NVMF_PORT", 00:17:17.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.573 "hdgst": ${hdgst:-false}, 00:17:17.573 "ddgst": ${ddgst:-false} 00:17:17.573 }, 00:17:17.573 "method": "bdev_nvme_attach_controller" 00:17:17.573 } 00:17:17.573 EOF 00:17:17.573 )") 00:17:17.573 13:46:20 -- nvmf/common.sh@543 -- # cat 00:17:17.573 13:46:20 -- nvmf/common.sh@545 -- # jq . 00:17:17.573 13:46:20 -- nvmf/common.sh@546 -- # IFS=, 00:17:17.573 13:46:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:17.573 "params": { 00:17:17.573 "name": "Nvme0", 00:17:17.573 "trtype": "rdma", 00:17:17.573 "traddr": "192.168.100.8", 00:17:17.573 "adrfam": "ipv4", 00:17:17.573 "trsvcid": "4420", 00:17:17.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:17.573 "hdgst": false, 00:17:17.573 "ddgst": false 00:17:17.573 }, 00:17:17.573 "method": "bdev_nvme_attach_controller" 00:17:17.573 }' 00:17:17.831 [2024-04-18 13:46:20.379695] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:17:17.831 [2024-04-18 13:46:20.379794] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167249 ] 00:17:17.831 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.831 [2024-04-18 13:46:20.468539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:17.831 [2024-04-18 13:46:20.596964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.831 [2024-04-18 13:46:20.596975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.380 bdev Nvme0n1 reports 1 memory domains 00:17:24.380 bdev Nvme0n1 supports RDMA memory domain 00:17:24.380 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:24.380 ========================================================================== 00:17:24.380 Latency [us] 00:17:24.380 IOPS MiB/s Average min max 00:17:24.380 Core 2: 16585.65 64.79 963.75 413.80 8825.04 00:17:24.380 Core 3: 16736.82 65.38 954.99 372.50 8962.59 00:17:24.380 ========================================================================== 00:17:24.380 Total : 33322.48 130.17 959.35 372.50 8962.59 00:17:24.380 00:17:24.380 Total operations: 166646, translate 166646 pull_push 0 memzero 0 00:17:24.380 13:46:26 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:17:24.380 13:46:26 -- host/dma.sh@107 -- # gen_malloc_json 00:17:24.380 13:46:26 -- host/dma.sh@21 -- # jq . 00:17:24.380 [2024-04-18 13:46:26.179032] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:24.380 [2024-04-18 13:46:26.179142] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167913 ] 00:17:24.380 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.380 [2024-04-18 13:46:26.265692] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:24.380 [2024-04-18 13:46:26.385682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.380 [2024-04-18 13:46:26.385686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.638 bdev Malloc0 reports 2 memory domains 00:17:29.638 bdev Malloc0 doesn't support RDMA memory domain 00:17:29.638 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:29.638 ========================================================================== 00:17:29.638 Latency [us] 00:17:29.638 IOPS MiB/s Average min max 00:17:29.638 Core 2: 10952.80 42.78 1459.69 513.33 2239.88 00:17:29.638 Core 3: 11167.11 43.62 1431.62 538.74 2587.49 00:17:29.638 ========================================================================== 00:17:29.638 Total : 22119.91 86.41 1445.51 513.33 2587.49 00:17:29.638 00:17:29.638 Total operations: 110648, translate 0 pull_push 442592 memzero 0 00:17:29.638 13:46:31 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:17:29.638 13:46:31 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:17:29.638 13:46:31 -- host/dma.sh@48 -- # local subsystem=0 00:17:29.638 13:46:31 -- host/dma.sh@50 -- # jq . 
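The three test_dma runs in this test exercise its data-path modes: -x translate against the NVMe-oF bdev Nvme0n1 (RDMA memory domain available), -x pull_push against a plain Malloc0 bdev (no RDMA memory domain), and -x memzero against lvs0/lvol0. A sketch of the three invocations using the flags recorded in this log; <cfg.json> stands in for whatever gen_nvmf_target_json, gen_malloc_json, or gen_lvol_nvme_json emitted on /dev/fd/62 for each run:

# sketch of the three data-path modes driven by host/dma.sh
test_dma -q 16 -o 4096 -w randrw   -M 70 -t 5 -m 0xc --json <cfg.json> -b Nvme0n1    -f -x translate
test_dma -q 16 -o 4096 -w randrw   -M 70 -t 5 -m 0xc --json <cfg.json> -b Malloc0       -x pull_push
test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json <cfg.json> -b lvs0/lvol0 -f -x memzero

The "Ignoring -M option" line that follows comes from that last memzero/randread run.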
00:17:29.638 Ignoring -M option 00:17:29.638 [2024-04-18 13:46:31.917043] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:29.638 [2024-04-18 13:46:31.917146] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168568 ] 00:17:29.638 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.638 [2024-04-18 13:46:32.002661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.638 [2024-04-18 13:46:32.125507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.638 [2024-04-18 13:46:32.125512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.638 [2024-04-18 13:46:32.391703] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:17:34.891 [2024-04-18 13:46:37.422781] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:17:35.150 bdev 7b84936c-7415-421c-8816-ca6c25905287 reports 1 memory domains 00:17:35.150 bdev 7b84936c-7415-421c-8816-ca6c25905287 supports RDMA memory domain 00:17:35.150 Initialization complete, running randread IO for 5 sec on 2 cores 00:17:35.150 ========================================================================== 00:17:35.150 Latency [us] 00:17:35.150 IOPS MiB/s Average min max 00:17:35.150 Core 2: 63837.33 249.36 249.58 83.00 1901.99 00:17:35.150 Core 3: 66445.09 259.55 239.77 73.00 1922.12 00:17:35.150 ========================================================================== 00:17:35.150 Total : 130282.41 508.92 244.58 73.00 1922.12 00:17:35.150 00:17:35.150 Total operations: 651472, translate 0 pull_push 0 memzero 651472 00:17:35.150 13:46:37 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:17:35.150 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.150 [2024-04-18 13:46:37.842577] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:37.678 Initializing NVMe Controllers 00:17:37.678 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:17:37.678 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:17:37.678 Initialization complete. Launching workers. 
00:17:37.678 ======================================================== 00:17:37.678 Latency(us) 00:17:37.678 Device Information : IOPS MiB/s Average min max 00:17:37.678 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7979.78 7943.66 7990.51 00:17:37.678 ======================================================== 00:17:37.678 Total : 2016.00 7.88 7979.78 7943.66 7990.51 00:17:37.678 00:17:37.678 13:46:40 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:17:37.678 13:46:40 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:17:37.678 13:46:40 -- host/dma.sh@48 -- # local subsystem=0 00:17:37.678 13:46:40 -- host/dma.sh@50 -- # jq . 00:17:37.678 [2024-04-18 13:46:40.205105] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:37.678 [2024-04-18 13:46:40.205221] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169500 ] 00:17:37.678 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.678 [2024-04-18 13:46:40.295540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:37.678 [2024-04-18 13:46:40.418469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.678 [2024-04-18 13:46:40.418474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.935 [2024-04-18 13:46:40.689636] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:17:43.210 [2024-04-18 13:46:45.721973] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:17:43.467 bdev 00d92c37-779b-44ea-95a9-d1cd770b2bc1 reports 1 memory domains 00:17:43.467 bdev 00d92c37-779b-44ea-95a9-d1cd770b2bc1 supports RDMA memory domain 00:17:43.467 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:43.467 ========================================================================== 00:17:43.467 Latency [us] 00:17:43.467 IOPS MiB/s Average min max 00:17:43.467 Core 2: 14106.42 55.10 1133.21 81.18 9865.30 00:17:43.467 Core 3: 14542.28 56.81 1099.24 13.99 9625.90 00:17:43.467 ========================================================================== 00:17:43.467 Total : 28648.71 111.91 1115.97 13.99 9865.30 00:17:43.467 00:17:43.467 Total operations: 143290, translate 143183 pull_push 0 memzero 107 00:17:43.467 13:46:46 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:17:43.467 13:46:46 -- host/dma.sh@120 -- # nvmftestfini 00:17:43.467 13:46:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:43.467 13:46:46 -- nvmf/common.sh@117 -- # sync 00:17:43.467 13:46:46 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:43.467 13:46:46 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:43.467 13:46:46 -- nvmf/common.sh@120 -- # set +e 00:17:43.467 13:46:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.467 13:46:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:43.467 rmmod nvme_rdma 00:17:43.467 rmmod nvme_fabrics 00:17:43.467 13:46:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.467 13:46:46 -- nvmf/common.sh@124 -- # set -e 00:17:43.467 13:46:46 -- 
nvmf/common.sh@125 -- # return 0 00:17:43.467 13:46:46 -- nvmf/common.sh@478 -- # '[' -n 1167116 ']' 00:17:43.467 13:46:46 -- nvmf/common.sh@479 -- # killprocess 1167116 00:17:43.467 13:46:46 -- common/autotest_common.sh@936 -- # '[' -z 1167116 ']' 00:17:43.467 13:46:46 -- common/autotest_common.sh@940 -- # kill -0 1167116 00:17:43.467 13:46:46 -- common/autotest_common.sh@941 -- # uname 00:17:43.467 13:46:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.467 13:46:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1167116 00:17:43.467 13:46:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:43.467 13:46:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:43.467 13:46:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1167116' 00:17:43.467 killing process with pid 1167116 00:17:43.467 13:46:46 -- common/autotest_common.sh@955 -- # kill 1167116 00:17:43.467 13:46:46 -- common/autotest_common.sh@960 -- # wait 1167116 00:17:44.032 13:46:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:44.032 13:46:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:44.032 00:17:44.032 real 0m29.681s 00:17:44.032 user 1m37.875s 00:17:44.032 sys 0m3.350s 00:17:44.032 13:46:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:44.032 13:46:46 -- common/autotest_common.sh@10 -- # set +x 00:17:44.032 ************************************ 00:17:44.032 END TEST dma 00:17:44.032 ************************************ 00:17:44.032 13:46:46 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:44.032 13:46:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.032 13:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.032 13:46:46 -- common/autotest_common.sh@10 -- # set +x 00:17:44.032 ************************************ 00:17:44.032 START TEST nvmf_identify 00:17:44.032 ************************************ 00:17:44.032 13:46:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:44.032 * Looking for test storage... 
00:17:44.032 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:44.032 13:46:46 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.032 13:46:46 -- nvmf/common.sh@7 -- # uname -s 00:17:44.032 13:46:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.032 13:46:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.032 13:46:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.032 13:46:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.032 13:46:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.032 13:46:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.032 13:46:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.032 13:46:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.032 13:46:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.032 13:46:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.032 13:46:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:44.032 13:46:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:44.032 13:46:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.032 13:46:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.032 13:46:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.032 13:46:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.032 13:46:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:44.032 13:46:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.032 13:46:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.032 13:46:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.032 13:46:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.032 13:46:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.033 13:46:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.033 13:46:46 -- paths/export.sh@5 -- # export PATH 00:17:44.033 13:46:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.033 13:46:46 -- nvmf/common.sh@47 -- # : 0 00:17:44.033 13:46:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.033 13:46:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.033 13:46:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.033 13:46:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.033 13:46:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.033 13:46:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.033 13:46:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.033 13:46:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.033 13:46:46 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:44.033 13:46:46 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:44.033 13:46:46 -- host/identify.sh@14 -- # nvmftestinit 00:17:44.033 13:46:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:44.033 13:46:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.033 13:46:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:44.033 13:46:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:44.033 13:46:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:44.033 13:46:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.033 13:46:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.033 13:46:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.033 13:46:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:44.033 13:46:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:44.033 13:46:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.033 13:46:46 -- common/autotest_common.sh@10 -- # set +x 00:17:46.589 13:46:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:46.589 13:46:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.589 13:46:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.589 13:46:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.589 13:46:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.589 13:46:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.589 13:46:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.589 13:46:49 -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.589 13:46:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.589 13:46:49 -- nvmf/common.sh@296 
-- # e810=() 00:17:46.589 13:46:49 -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.589 13:46:49 -- nvmf/common.sh@297 -- # x722=() 00:17:46.589 13:46:49 -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.589 13:46:49 -- nvmf/common.sh@298 -- # mlx=() 00:17:46.589 13:46:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.589 13:46:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.589 13:46:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.589 13:46:49 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:46.589 13:46:49 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:46.589 13:46:49 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:46.589 13:46:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.589 13:46:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.589 13:46:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:46.589 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:46.589 13:46:49 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:46.589 13:46:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.589 13:46:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:46.589 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:46.589 13:46:49 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:46.589 13:46:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.589 13:46:49 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.589 13:46:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.589 13:46:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
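The discovery loop above maps each supported ConnectX PCI function to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that sysfs lookup for the two functions this log found:

# sketch: PCI function -> netdev name, as the common.sh loop above does
for pci in 0000:81:00.0 0000:81:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "$pci -> $(basename "$dev")"
    done
done
# expected here: 0000:81:00.0 -> mlx_0_0 and 0000:81:00.1 -> mlx_0_1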
00:17:46.589 13:46:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.589 13:46:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:46.589 Found net devices under 0000:81:00.0: mlx_0_0 00:17:46.589 13:46:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.589 13:46:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.589 13:46:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.589 13:46:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:46.589 13:46:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.589 13:46:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:46.589 Found net devices under 0000:81:00.1: mlx_0_1 00:17:46.589 13:46:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.589 13:46:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:46.589 13:46:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:46.589 13:46:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:46.589 13:46:49 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:46.589 13:46:49 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:46.589 13:46:49 -- nvmf/common.sh@58 -- # uname 00:17:46.589 13:46:49 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:46.589 13:46:49 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:46.589 13:46:49 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:46.589 13:46:49 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:46.589 13:46:49 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:46.848 13:46:49 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:46.848 13:46:49 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:46.848 13:46:49 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:46.848 13:46:49 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:46.848 13:46:49 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:46.848 13:46:49 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:46.848 13:46:49 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:46.848 13:46:49 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:46.848 13:46:49 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:46.848 13:46:49 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:46.848 13:46:49 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:46.848 13:46:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@105 -- # continue 2 00:17:46.848 13:46:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@105 -- # continue 2 00:17:46.848 13:46:49 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:17:46.848 13:46:49 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:46.848 13:46:49 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:46.848 13:46:49 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:46.848 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:46.848 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:46.848 altname enp129s0f0np0 00:17:46.848 inet 192.168.100.8/24 scope global mlx_0_0 00:17:46.848 valid_lft forever preferred_lft forever 00:17:46.848 13:46:49 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:46.848 13:46:49 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:46.848 13:46:49 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:46.848 13:46:49 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:46.848 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:46.848 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:46.848 altname enp129s0f1np1 00:17:46.848 inet 192.168.100.9/24 scope global mlx_0_1 00:17:46.848 valid_lft forever preferred_lft forever 00:17:46.848 13:46:49 -- nvmf/common.sh@411 -- # return 0 00:17:46.848 13:46:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:46.848 13:46:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:46.848 13:46:49 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:46.848 13:46:49 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:46.848 13:46:49 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:46.848 13:46:49 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:46.848 13:46:49 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:46.848 13:46:49 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:46.848 13:46:49 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:46.848 13:46:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@105 -- # continue 2 00:17:46.848 13:46:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:46.848 13:46:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:46.848 13:46:49 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@105 -- # continue 2 00:17:46.848 13:46:49 -- nvmf/common.sh@86 
-- # for nic_name in $(get_rdma_if_list) 00:17:46.848 13:46:49 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:46.848 13:46:49 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:46.848 13:46:49 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:46.848 13:46:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:46.848 13:46:49 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:46.848 192.168.100.9' 00:17:46.848 13:46:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:46.848 192.168.100.9' 00:17:46.848 13:46:49 -- nvmf/common.sh@446 -- # head -n 1 00:17:46.848 13:46:49 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:46.848 13:46:49 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:46.848 192.168.100.9' 00:17:46.848 13:46:49 -- nvmf/common.sh@447 -- # tail -n +2 00:17:46.848 13:46:49 -- nvmf/common.sh@447 -- # head -n 1 00:17:46.849 13:46:49 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:46.849 13:46:49 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:46.849 13:46:49 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:46.849 13:46:49 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:46.849 13:46:49 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:46.849 13:46:49 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:46.849 13:46:49 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:46.849 13:46:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:46.849 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:17:46.849 13:46:49 -- host/identify.sh@19 -- # nvmfpid=1172269 00:17:46.849 13:46:49 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.849 13:46:49 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.849 13:46:49 -- host/identify.sh@23 -- # waitforlisten 1172269 00:17:46.849 13:46:49 -- common/autotest_common.sh@817 -- # '[' -z 1172269 ']' 00:17:46.849 13:46:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.849 13:46:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.849 13:46:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.849 13:46:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.849 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:17:46.849 [2024-04-18 13:46:49.558590] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:17:46.849 [2024-04-18 13:46:49.558672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.849 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.849 [2024-04-18 13:46:49.632099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.107 [2024-04-18 13:46:49.755627] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.107 [2024-04-18 13:46:49.755684] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.107 [2024-04-18 13:46:49.755700] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.107 [2024-04-18 13:46:49.755714] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.107 [2024-04-18 13:46:49.755726] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.107 [2024-04-18 13:46:49.755813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.107 [2024-04-18 13:46:49.755867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.107 [2024-04-18 13:46:49.755921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.107 [2024-04-18 13:46:49.755923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.040 13:46:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.040 13:46:50 -- common/autotest_common.sh@850 -- # return 0 00:17:48.040 13:46:50 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:48.040 13:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.040 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.040 [2024-04-18 13:46:50.609365] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2416090/0x241a580) succeed. 00:17:48.040 [2024-04-18 13:46:50.621582] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2417680/0x245bc10) succeed. 
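Unlike the dma run, the identify test starts nvmf_tgt on four cores (-m 0xF) and creates the RDMA transport with an additional -u 8192 argument before provisioning the subsystem. A hedged sketch of that transport call issued through the stock rpc.py client (assumptions: rpc_cmd forwards to scripts/rpk.py's counterpart scripts/rpc.py, and -u is the short option for the transport's I/O unit size; the log only shows the short flag):

# sketch of the transport creation the identify test just performed
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192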
00:17:48.040 13:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.040 13:46:50 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:48.040 13:46:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:48.040 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.040 13:46:50 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:48.040 13:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.040 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.040 Malloc0 00:17:48.040 13:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.040 13:46:50 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:48.040 13:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.040 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.301 13:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.301 13:46:50 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:48.301 13:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.301 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.301 13:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.301 13:46:50 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:48.301 13:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.301 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.301 [2024-04-18 13:46:50.861535] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:48.301 13:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.301 13:46:50 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:48.301 13:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.301 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.301 13:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.301 13:46:50 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:48.301 13:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.301 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.301 [2024-04-18 13:46:50.877194] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:48.301 [ 00:17:48.301 { 00:17:48.301 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:48.301 "subtype": "Discovery", 00:17:48.301 "listen_addresses": [ 00:17:48.301 { 00:17:48.301 "transport": "RDMA", 00:17:48.301 "trtype": "RDMA", 00:17:48.301 "adrfam": "IPv4", 00:17:48.301 "traddr": "192.168.100.8", 00:17:48.301 "trsvcid": "4420" 00:17:48.301 } 00:17:48.301 ], 00:17:48.301 "allow_any_host": true, 00:17:48.301 "hosts": [] 00:17:48.301 }, 00:17:48.301 { 00:17:48.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.301 "subtype": "NVMe", 00:17:48.301 "listen_addresses": [ 00:17:48.301 { 00:17:48.301 "transport": "RDMA", 00:17:48.301 "trtype": "RDMA", 00:17:48.301 "adrfam": "IPv4", 00:17:48.301 "traddr": "192.168.100.8", 00:17:48.301 "trsvcid": "4420" 00:17:48.301 } 00:17:48.301 ], 00:17:48.301 "allow_any_host": true, 00:17:48.301 "hosts": [], 00:17:48.301 "serial_number": "SPDK00000000000001", 
00:17:48.301 "model_number": "SPDK bdev Controller", 00:17:48.301 "max_namespaces": 32, 00:17:48.301 "min_cntlid": 1, 00:17:48.301 "max_cntlid": 65519, 00:17:48.301 "namespaces": [ 00:17:48.301 { 00:17:48.301 "nsid": 1, 00:17:48.301 "bdev_name": "Malloc0", 00:17:48.301 "name": "Malloc0", 00:17:48.301 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:48.301 "eui64": "ABCDEF0123456789", 00:17:48.301 "uuid": "c6c6d399-2b1f-4b18-a078-9ba15bddc1b9" 00:17:48.301 } 00:17:48.301 ] 00:17:48.301 } 00:17:48.301 ] 00:17:48.301 13:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.301 13:46:50 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:48.301 [2024-04-18 13:46:50.906569] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:48.302 [2024-04-18 13:46:50.906632] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172507 ] 00:17:48.302 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.302 [2024-04-18 13:46:50.965236] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:48.302 [2024-04-18 13:46:50.965338] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:48.302 [2024-04-18 13:46:50.965367] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:48.302 [2024-04-18 13:46:50.965376] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:48.302 [2024-04-18 13:46:50.965419] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:48.302 [2024-04-18 13:46:50.977498] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:17:48.302 [2024-04-18 13:46:50.994605] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:48.302 [2024-04-18 13:46:50.994623] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:17:48.302 [2024-04-18 13:46:50.994636] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994653] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994663] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994672] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994681] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994690] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994699] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994708] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994718] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994727] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994736] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994745] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994754] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994763] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994772] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994781] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994790] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994799] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994809] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994818] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994827] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994836] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994846] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 
13:46:50.994855] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994864] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994873] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994882] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994891] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994900] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994909] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994918] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.994927] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:48.302 [2024-04-18 13:46:50.994935] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:48.302 [2024-04-18 13:46:50.994954] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:48.302 [2024-04-18 13:46:50.994987] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:50.995011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x183900 00:17:48.302 [2024-04-18 13:46:51.000948] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.302 [2024-04-18 13:46:51.000970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-04-18 13:46:51.000992] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001005] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:48.302 [2024-04-18 13:46:51.001016] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:48.302 [2024-04-18 13:46:51.001027] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:48.302 [2024-04-18 13:46:51.001053] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.302 [2024-04-18 13:46:51.001107] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.302 [2024-04-18 13:46:51.001118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:48.302 [2024-04-18 13:46:51.001133] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:48.302 [2024-04-18 13:46:51.001143] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001154] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:48.302 [2024-04-18 13:46:51.001167] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.302 [2024-04-18 13:46:51.001200] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.302 [2024-04-18 13:46:51.001210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:48.302 [2024-04-18 13:46:51.001220] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:48.302 [2024-04-18 13:46:51.001229] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001241] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:48.302 [2024-04-18 13:46:51.001253] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.302 [2024-04-18 13:46:51.001287] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.302 [2024-04-18 13:46:51.001297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:48.302 [2024-04-18 13:46:51.001307] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:48.302 [2024-04-18 13:46:51.001316] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001335] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.302 [2024-04-18 13:46:51.001369] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.302 [2024-04-18 13:46:51.001378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:48.302 [2024-04-18 13:46:51.001388] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:48.302 [2024-04-18 13:46:51.001397] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:48.302 [2024-04-18 13:46:51.001406] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001416] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:48.302 [2024-04-18 13:46:51.001527] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:48.302 [2024-04-18 13:46:51.001536] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:48.302 [2024-04-18 13:46:51.001550] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.302 [2024-04-18 13:46:51.001590] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.302 [2024-04-18 13:46:51.001600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:48.302 [2024-04-18 13:46:51.001610] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:48.302 [2024-04-18 13:46:51.001619] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183900 00:17:48.302 [2024-04-18 13:46:51.001632] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.001645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.303 [2024-04-18 13:46:51.001667] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.001677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.001686] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:48.303 [2024-04-18 13:46:51.001695] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:48.303 [2024-04-18 13:46:51.001703] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.001714] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:48.303 [2024-04-18 13:46:51.001728] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:48.303 [2024-04-18 13:46:51.001746] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.001760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183900 00:17:48.303 [2024-04-18 13:46:51.001823] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.001834] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.001849] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:48.303 [2024-04-18 13:46:51.001859] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:48.303 [2024-04-18 13:46:51.001867] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:48.303 [2024-04-18 13:46:51.001881] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:48.303 [2024-04-18 13:46:51.001891] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:48.303 [2024-04-18 13:46:51.001900] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:48.303 [2024-04-18 13:46:51.001908] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.001920] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:48.303 [2024-04-18 13:46:51.001933] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.001956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.303 [2024-04-18 13:46:51.001990] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.002000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.002014] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.303 [2024-04-18 13:46:51.002036] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.303 [2024-04-18 13:46:51.002057] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.303 [2024-04-18 13:46:51.002078] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.303 [2024-04-18 13:46:51.002098] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:17:48.303 [2024-04-18 13:46:51.002106] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002125] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:48.303 [2024-04-18 13:46:51.002138] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.303 [2024-04-18 13:46:51.002183] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.002193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.002204] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:48.303 [2024-04-18 13:46:51.002214] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:48.303 [2024-04-18 13:46:51.002222] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002239] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183900 00:17:48.303 [2024-04-18 13:46:51.002289] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.002299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.002312] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002328] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:48.303 [2024-04-18 13:46:51.002358] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183900 00:17:48.303 [2024-04-18 13:46:51.002386] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.303 [2024-04-18 13:46:51.002429] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.002440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.002460] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b40 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183900 00:17:48.303 [2024-04-18 13:46:51.002484] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002494] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.002503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.002512] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002522] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.002530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.002547] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183900 00:17:48.303 [2024-04-18 13:46:51.002575] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183900 00:17:48.303 [2024-04-18 13:46:51.002598] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.303 [2024-04-18 13:46:51.002608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:48.303 [2024-04-18 13:46:51.002626] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183900 00:17:48.303 ===================================================== 00:17:48.303 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:48.303 ===================================================== 00:17:48.303 Controller Capabilities/Features 00:17:48.303 ================================ 00:17:48.303 Vendor ID: 0000 00:17:48.303 Subsystem Vendor ID: 0000 00:17:48.303 Serial Number: .................... 00:17:48.303 Model Number: ........................................ 
00:17:48.303 Firmware Version: 24.05 00:17:48.303 Recommended Arb Burst: 0 00:17:48.303 IEEE OUI Identifier: 00 00 00 00:17:48.303 Multi-path I/O 00:17:48.303 May have multiple subsystem ports: No 00:17:48.303 May have multiple controllers: No 00:17:48.303 Associated with SR-IOV VF: No 00:17:48.303 Max Data Transfer Size: 131072 00:17:48.303 Max Number of Namespaces: 0 00:17:48.303 Max Number of I/O Queues: 1024 00:17:48.303 NVMe Specification Version (VS): 1.3 00:17:48.303 NVMe Specification Version (Identify): 1.3 00:17:48.303 Maximum Queue Entries: 128 00:17:48.303 Contiguous Queues Required: Yes 00:17:48.303 Arbitration Mechanisms Supported 00:17:48.303 Weighted Round Robin: Not Supported 00:17:48.303 Vendor Specific: Not Supported 00:17:48.303 Reset Timeout: 15000 ms 00:17:48.303 Doorbell Stride: 4 bytes 00:17:48.303 NVM Subsystem Reset: Not Supported 00:17:48.303 Command Sets Supported 00:17:48.303 NVM Command Set: Supported 00:17:48.303 Boot Partition: Not Supported 00:17:48.303 Memory Page Size Minimum: 4096 bytes 00:17:48.303 Memory Page Size Maximum: 4096 bytes 00:17:48.303 Persistent Memory Region: Not Supported 00:17:48.304 Optional Asynchronous Events Supported 00:17:48.304 Namespace Attribute Notices: Not Supported 00:17:48.304 Firmware Activation Notices: Not Supported 00:17:48.304 ANA Change Notices: Not Supported 00:17:48.304 PLE Aggregate Log Change Notices: Not Supported 00:17:48.304 LBA Status Info Alert Notices: Not Supported 00:17:48.304 EGE Aggregate Log Change Notices: Not Supported 00:17:48.304 Normal NVM Subsystem Shutdown event: Not Supported 00:17:48.304 Zone Descriptor Change Notices: Not Supported 00:17:48.304 Discovery Log Change Notices: Supported 00:17:48.304 Controller Attributes 00:17:48.304 128-bit Host Identifier: Not Supported 00:17:48.304 Non-Operational Permissive Mode: Not Supported 00:17:48.304 NVM Sets: Not Supported 00:17:48.304 Read Recovery Levels: Not Supported 00:17:48.304 Endurance Groups: Not Supported 00:17:48.304 Predictable Latency Mode: Not Supported 00:17:48.304 Traffic Based Keep ALive: Not Supported 00:17:48.304 Namespace Granularity: Not Supported 00:17:48.304 SQ Associations: Not Supported 00:17:48.304 UUID List: Not Supported 00:17:48.304 Multi-Domain Subsystem: Not Supported 00:17:48.304 Fixed Capacity Management: Not Supported 00:17:48.304 Variable Capacity Management: Not Supported 00:17:48.304 Delete Endurance Group: Not Supported 00:17:48.304 Delete NVM Set: Not Supported 00:17:48.304 Extended LBA Formats Supported: Not Supported 00:17:48.304 Flexible Data Placement Supported: Not Supported 00:17:48.304 00:17:48.304 Controller Memory Buffer Support 00:17:48.304 ================================ 00:17:48.304 Supported: No 00:17:48.304 00:17:48.304 Persistent Memory Region Support 00:17:48.304 ================================ 00:17:48.304 Supported: No 00:17:48.304 00:17:48.304 Admin Command Set Attributes 00:17:48.304 ============================ 00:17:48.304 Security Send/Receive: Not Supported 00:17:48.304 Format NVM: Not Supported 00:17:48.304 Firmware Activate/Download: Not Supported 00:17:48.304 Namespace Management: Not Supported 00:17:48.304 Device Self-Test: Not Supported 00:17:48.304 Directives: Not Supported 00:17:48.304 NVMe-MI: Not Supported 00:17:48.304 Virtualization Management: Not Supported 00:17:48.304 Doorbell Buffer Config: Not Supported 00:17:48.304 Get LBA Status Capability: Not Supported 00:17:48.304 Command & Feature Lockdown Capability: Not Supported 00:17:48.304 Abort Command Limit: 1 00:17:48.304 Async 
Event Request Limit: 4 00:17:48.304 Number of Firmware Slots: N/A 00:17:48.304 Firmware Slot 1 Read-Only: N/A 00:17:48.304 Firmware Activation Without Reset: N/A 00:17:48.304 Multiple Update Detection Support: N/A 00:17:48.304 Firmware Update Granularity: No Information Provided 00:17:48.304 Per-Namespace SMART Log: No 00:17:48.304 Asymmetric Namespace Access Log Page: Not Supported 00:17:48.304 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:48.304 Command Effects Log Page: Not Supported 00:17:48.304 Get Log Page Extended Data: Supported 00:17:48.304 Telemetry Log Pages: Not Supported 00:17:48.304 Persistent Event Log Pages: Not Supported 00:17:48.304 Supported Log Pages Log Page: May Support 00:17:48.304 Commands Supported & Effects Log Page: Not Supported 00:17:48.304 Feature Identifiers & Effects Log Page:May Support 00:17:48.304 NVMe-MI Commands & Effects Log Page: May Support 00:17:48.304 Data Area 4 for Telemetry Log: Not Supported 00:17:48.304 Error Log Page Entries Supported: 128 00:17:48.304 Keep Alive: Not Supported 00:17:48.304 00:17:48.304 NVM Command Set Attributes 00:17:48.304 ========================== 00:17:48.304 Submission Queue Entry Size 00:17:48.304 Max: 1 00:17:48.304 Min: 1 00:17:48.304 Completion Queue Entry Size 00:17:48.304 Max: 1 00:17:48.304 Min: 1 00:17:48.304 Number of Namespaces: 0 00:17:48.304 Compare Command: Not Supported 00:17:48.304 Write Uncorrectable Command: Not Supported 00:17:48.304 Dataset Management Command: Not Supported 00:17:48.304 Write Zeroes Command: Not Supported 00:17:48.304 Set Features Save Field: Not Supported 00:17:48.304 Reservations: Not Supported 00:17:48.304 Timestamp: Not Supported 00:17:48.304 Copy: Not Supported 00:17:48.304 Volatile Write Cache: Not Present 00:17:48.304 Atomic Write Unit (Normal): 1 00:17:48.304 Atomic Write Unit (PFail): 1 00:17:48.304 Atomic Compare & Write Unit: 1 00:17:48.304 Fused Compare & Write: Supported 00:17:48.304 Scatter-Gather List 00:17:48.304 SGL Command Set: Supported 00:17:48.304 SGL Keyed: Supported 00:17:48.304 SGL Bit Bucket Descriptor: Not Supported 00:17:48.304 SGL Metadata Pointer: Not Supported 00:17:48.304 Oversized SGL: Not Supported 00:17:48.304 SGL Metadata Address: Not Supported 00:17:48.304 SGL Offset: Supported 00:17:48.304 Transport SGL Data Block: Not Supported 00:17:48.304 Replay Protected Memory Block: Not Supported 00:17:48.304 00:17:48.304 Firmware Slot Information 00:17:48.304 ========================= 00:17:48.304 Active slot: 0 00:17:48.304 00:17:48.304 00:17:48.304 Error Log 00:17:48.304 ========= 00:17:48.304 00:17:48.304 Active Namespaces 00:17:48.304 ================= 00:17:48.304 Discovery Log Page 00:17:48.304 ================== 00:17:48.304 Generation Counter: 2 00:17:48.304 Number of Records: 2 00:17:48.304 Record Format: 0 00:17:48.304 00:17:48.304 Discovery Log Entry 0 00:17:48.304 ---------------------- 00:17:48.304 Transport Type: 1 (RDMA) 00:17:48.304 Address Family: 1 (IPv4) 00:17:48.304 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:48.304 Entry Flags: 00:17:48.304 Duplicate Returned Information: 1 00:17:48.304 Explicit Persistent Connection Support for Discovery: 1 00:17:48.304 Transport Requirements: 00:17:48.304 Secure Channel: Not Required 00:17:48.304 Port ID: 0 (0x0000) 00:17:48.304 Controller ID: 65535 (0xffff) 00:17:48.304 Admin Max SQ Size: 128 00:17:48.304 Transport Service Identifier: 4420 00:17:48.304 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:48.304 Transport Address: 192.168.100.8 00:17:48.304 
Transport Specific Address Subtype - RDMA 00:17:48.304 RDMA QP Service Type: 1 (Reliable Connected) 00:17:48.304 RDMA Provider Type: 1 (No provider specified) 00:17:48.304 RDMA CM Service: 1 (RDMA_CM) 00:17:48.304 Discovery Log Entry 1 00:17:48.304 ---------------------- 00:17:48.304 Transport Type: 1 (RDMA) 00:17:48.304 Address Family: 1 (IPv4) 00:17:48.304 Subsystem Type: 2 (NVM Subsystem) 00:17:48.304 Entry Flags: 00:17:48.304 Duplicate Returned Information: 0 00:17:48.304 Explicit Persistent Connection Support for Discovery: 0 00:17:48.304 Transport Requirements: 00:17:48.304 Secure Channel: Not Required 00:17:48.304 Port ID: 0 (0x0000) 00:17:48.304 Controller ID: 65535 (0xffff) 00:17:48.304 Admin Max SQ Size: [2024-04-18 13:46:51.002736] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:48.304 [2024-04-18 13:46:51.002756] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42684 doesn't match qid 00:17:48.304 [2024-04-18 13:46:51.002778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32627 cdw0:5 sqhd:d790 p:0 m:0 dnr:0 00:17:48.304 [2024-04-18 13:46:51.002790] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42684 doesn't match qid 00:17:48.304 [2024-04-18 13:46:51.002803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32627 cdw0:5 sqhd:d790 p:0 m:0 dnr:0 00:17:48.304 [2024-04-18 13:46:51.002813] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42684 doesn't match qid 00:17:48.304 [2024-04-18 13:46:51.002826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32627 cdw0:5 sqhd:d790 p:0 m:0 dnr:0 00:17:48.304 [2024-04-18 13:46:51.002836] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42684 doesn't match qid 00:17:48.304 [2024-04-18 13:46:51.002849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32627 cdw0:5 sqhd:d790 p:0 m:0 dnr:0 00:17:48.304 [2024-04-18 13:46:51.002864] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183900 00:17:48.304 [2024-04-18 13:46:51.002877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.304 [2024-04-18 13:46:51.002903] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.304 [2024-04-18 13:46:51.002913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:17:48.304 [2024-04-18 13:46:51.002931] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.304 [2024-04-18 13:46:51.002957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.304 [2024-04-18 13:46:51.002969] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183900 00:17:48.304 [2024-04-18 13:46:51.002989] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.304 [2024-04-18 13:46:51.002999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:48.304 [2024-04-18 13:46:51.003009] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:48.304 [2024-04-18 13:46:51.003018] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:48.305 [2024-04-18 13:46:51.003027] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003040] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003081] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003106] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003122] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003156] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003177] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003192] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003226] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003248] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003261] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003298] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003319] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003332] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 
00:17:48.305 [2024-04-18 13:46:51.003346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003367] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003386] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003399] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003432] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003451] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003465] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003498] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003521] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003536] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003572] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003591] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003605] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003642] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003661] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003675] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003710] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003729] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003742] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003779] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003799] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003813] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003845] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003865] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003878] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.003915] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.003929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.003947] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003964] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.003977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.004000] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.004009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.004019] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004032] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.004065] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.004074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.004083] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004097] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.004131] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.004141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.004150] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004163] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.004200] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.305 [2024-04-18 13:46:51.004210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:48.305 [2024-04-18 13:46:51.004219] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004232] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.305 [2024-04-18 13:46:51.004245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.305 [2024-04-18 13:46:51.004269] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004288] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004301] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004336] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004360] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004374] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004412] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004430] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004444] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004477] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004496] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004509] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004544] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004563] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004576] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004609] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 
13:46:51.004627] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004641] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004678] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004697] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004710] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004742] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004762] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004776] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004809] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004828] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004841] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.004883] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.004892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.004901] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004915] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.004927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.008950] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.008968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.008978] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.008994] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.009008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.306 [2024-04-18 13:46:51.009038] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.306 [2024-04-18 13:46:51.009048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000e p:0 m:0 dnr:0 00:17:48.306 [2024-04-18 13:46:51.009057] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183900 00:17:48.306 [2024-04-18 13:46:51.009068] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:17:48.306 128 00:17:48.306 Transport Service Identifier: 4420 00:17:48.306 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:48.306 Transport Address: 192.168.100.8 00:17:48.306 Transport Specific Address Subtype - RDMA 00:17:48.306 RDMA QP Service Type: 1 (Reliable Connected) 00:17:48.306 RDMA Provider Type: 1 (No provider specified) 00:17:48.306 RDMA CM Service: 1 (RDMA_CM) 00:17:48.306 13:46:51 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:48.306 [2024-04-18 13:46:51.101338] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:48.306 [2024-04-18 13:46:51.101435] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172512 ] 00:17:48.566 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.566 [2024-04-18 13:46:51.171239] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:48.566 [2024-04-18 13:46:51.171333] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:48.566 [2024-04-18 13:46:51.171356] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:48.566 [2024-04-18 13:46:51.171364] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:48.566 [2024-04-18 13:46:51.171398] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:48.566 [2024-04-18 13:46:51.182572] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
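(Editor's aside: the -L all invocation above points spdk_nvme_identify at the RDMA target with a transport ID string — trtype rdma, adrfam IPv4, traddr 192.168.100.8, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1 — and the DEBUG lines that follow are the driver connecting the admin queue and walking the controller-enable state machine. As a rough sketch only, not the tool's actual source, the same connect-and-identify path through the public SPDK C API looks approximately like the snippet below; the program name, option handling, and error paths are assumptions, while the transport values are copied from the command line shown above.)

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Initialize the SPDK environment (hugepages, memory, etc.). */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Describe the NVMe-oF/RDMA target, mirroring the -r string above. */
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_RDMA);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "192.168.100.8");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        /* Connect the admin queue; this drives the FABRIC CONNECT and
         * FABRIC PROPERTY GET/SET exchange visible in the trace. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Cached Identify Controller data backs the report the tool prints. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", cdata->sn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

(spdk_nvme_connect() is what issues the FABRIC CONNECT and the property get/set admin commands logged here, and spdk_nvme_ctrlr_get_data() returns the Identify Controller structure from which the "Controller Capabilities/Features" report further down is formatted.)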
00:17:48.566 [2024-04-18 13:46:51.199667] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:48.566 [2024-04-18 13:46:51.199684] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:17:48.566 [2024-04-18 13:46:51.199696] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199706] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199715] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199725] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199734] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199743] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199752] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199761] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199770] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199779] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199788] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199797] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199806] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199815] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199824] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199833] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199842] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199851] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199860] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199874] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199884] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199894] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199903] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 
13:46:51.199912] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199921] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199930] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199946] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199957] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199966] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199975] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199984] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.199993] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:48.566 [2024-04-18 13:46:51.200001] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:48.566 [2024-04-18 13:46:51.200008] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:48.566 [2024-04-18 13:46:51.200033] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.200053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x183900 00:17:48.566 [2024-04-18 13:46:51.206948] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.566 [2024-04-18 13:46:51.206967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:48.566 [2024-04-18 13:46:51.206978] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.206989] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:48.566 [2024-04-18 13:46:51.207001] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:48.566 [2024-04-18 13:46:51.207011] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:48.566 [2024-04-18 13:46:51.207031] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.566 [2024-04-18 13:46:51.207073] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.566 [2024-04-18 13:46:51.207084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:48.566 [2024-04-18 13:46:51.207097] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:48.566 [2024-04-18 13:46:51.207107] nvme_rdma.c:2436:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207118] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:48.566 [2024-04-18 13:46:51.207131] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.566 [2024-04-18 13:46:51.207168] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.566 [2024-04-18 13:46:51.207177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:48.566 [2024-04-18 13:46:51.207188] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:48.566 [2024-04-18 13:46:51.207197] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207208] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:48.566 [2024-04-18 13:46:51.207220] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.566 [2024-04-18 13:46:51.207253] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.566 [2024-04-18 13:46:51.207263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:48.566 [2024-04-18 13:46:51.207273] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:48.566 [2024-04-18 13:46:51.207282] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207295] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.566 [2024-04-18 13:46:51.207329] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.566 [2024-04-18 13:46:51.207338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:48.566 [2024-04-18 13:46:51.207348] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:48.566 [2024-04-18 13:46:51.207356] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:48.566 [2024-04-18 13:46:51.207365] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207376] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:17:48.566 [2024-04-18 13:46:51.207485] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:48.566 [2024-04-18 13:46:51.207493] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:48.566 [2024-04-18 13:46:51.207507] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.566 [2024-04-18 13:46:51.207519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.566 [2024-04-18 13:46:51.207538] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.566 [2024-04-18 13:46:51.207548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:48.566 [2024-04-18 13:46:51.207558] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:48.566 [2024-04-18 13:46:51.207567] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207585] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.207622] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.207632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.207641] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:48.567 [2024-04-18 13:46:51.207650] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.207659] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207670] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:48.567 [2024-04-18 13:46:51.207688] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.207705] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.207774] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.207784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.207798] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:48.567 [2024-04-18 13:46:51.207807] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:48.567 [2024-04-18 13:46:51.207815] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:48.567 [2024-04-18 13:46:51.207827] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:48.567 [2024-04-18 13:46:51.207837] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:48.567 [2024-04-18 13:46:51.207846] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.207855] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207867] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.207879] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.207918] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.207928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.207949] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.567 [2024-04-18 13:46:51.207974] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.207988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.567 [2024-04-18 13:46:51.208000] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.567 [2024-04-18 13:46:51.208021] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.567 [2024-04-18 13:46:51.208040] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208049] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208067] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208080] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.208119] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208138] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:48.567 [2024-04-18 13:46:51.208148] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208156] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208168] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208179] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208191] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.208236] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208311] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208322] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208337] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208354] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.208400] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208430] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:48.567 
[2024-04-18 13:46:51.208447] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208457] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208471] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208487] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.208543] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208576] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208587] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208601] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208617] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.208662] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208687] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208697] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208708] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208723] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208755] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:17:48.567 [2024-04-18 13:46:51.208763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:48.567 [2024-04-18 13:46:51.208773] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:48.567 [2024-04-18 13:46:51.208793] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.208823] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.567 [2024-04-18 13:46:51.208852] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208873] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208884] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208902] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208916] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.208963] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.208975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.208985] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.208999] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.209037] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.209047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.209057] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209070] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 
lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.567 [2024-04-18 13:46:51.209106] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.209115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.209125] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209143] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.209172] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.209202] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b40 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.209230] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183900 00:17:48.567 [2024-04-18 13:46:51.209257] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.209267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.209291] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209303] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.209312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.209326] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209338] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.209346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.209357] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183900 00:17:48.567 [2024-04-18 13:46:51.209367] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.567 [2024-04-18 13:46:51.209375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:48.567 [2024-04-18 13:46:51.209392] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183900 00:17:48.567 ===================================================== 00:17:48.567 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:48.567 ===================================================== 00:17:48.567 Controller Capabilities/Features 00:17:48.567 ================================ 00:17:48.567 Vendor ID: 8086 00:17:48.567 Subsystem Vendor ID: 8086 00:17:48.567 Serial Number: SPDK00000000000001 00:17:48.567 Model Number: SPDK bdev Controller 00:17:48.567 Firmware Version: 24.05 00:17:48.567 Recommended Arb Burst: 6 00:17:48.567 IEEE OUI Identifier: e4 d2 5c 00:17:48.567 Multi-path I/O 00:17:48.567 May have multiple subsystem ports: Yes 00:17:48.567 May have multiple controllers: Yes 00:17:48.567 Associated with SR-IOV VF: No 00:17:48.567 Max Data Transfer Size: 131072 00:17:48.567 Max Number of Namespaces: 32 00:17:48.567 Max Number of I/O Queues: 127 00:17:48.567 NVMe Specification Version (VS): 1.3 00:17:48.567 NVMe Specification Version (Identify): 1.3 00:17:48.567 Maximum Queue Entries: 128 00:17:48.567 Contiguous Queues Required: Yes 00:17:48.567 Arbitration Mechanisms Supported 00:17:48.567 Weighted Round Robin: Not Supported 00:17:48.567 Vendor Specific: Not Supported 00:17:48.567 Reset Timeout: 15000 ms 00:17:48.567 Doorbell Stride: 4 bytes 00:17:48.567 NVM Subsystem Reset: Not Supported 00:17:48.567 Command Sets Supported 00:17:48.567 NVM Command Set: Supported 00:17:48.567 Boot Partition: Not Supported 00:17:48.567 Memory Page Size Minimum: 4096 bytes 00:17:48.567 Memory Page Size Maximum: 4096 bytes 00:17:48.567 Persistent Memory Region: Not Supported 00:17:48.567 Optional Asynchronous Events Supported 00:17:48.567 Namespace Attribute Notices: Supported 00:17:48.567 Firmware Activation Notices: Not Supported 00:17:48.567 ANA Change Notices: Not Supported 00:17:48.567 PLE Aggregate Log Change Notices: Not Supported 00:17:48.567 LBA Status Info Alert Notices: Not Supported 00:17:48.567 EGE Aggregate Log Change Notices: Not Supported 00:17:48.567 Normal NVM Subsystem Shutdown event: Not Supported 00:17:48.567 Zone Descriptor Change Notices: Not Supported 00:17:48.567 Discovery Log Change Notices: Not Supported 00:17:48.567 Controller Attributes 00:17:48.567 128-bit Host Identifier: Supported 00:17:48.567 Non-Operational Permissive Mode: Not Supported 00:17:48.567 NVM Sets: Not Supported 00:17:48.567 Read Recovery Levels: Not Supported 00:17:48.567 Endurance Groups: Not Supported 00:17:48.567 Predictable Latency Mode: Not Supported 00:17:48.567 Traffic Based Keep ALive: Not Supported 00:17:48.567 Namespace Granularity: Not Supported 00:17:48.567 SQ Associations: Not Supported 00:17:48.567 UUID List: Not Supported 00:17:48.567 Multi-Domain Subsystem: Not Supported 00:17:48.567 Fixed Capacity Management: Not Supported 00:17:48.567 Variable Capacity Management: Not Supported 00:17:48.567 Delete Endurance Group: Not Supported 00:17:48.567 Delete NVM Set: Not Supported 00:17:48.567 Extended LBA Formats Supported: Not Supported 00:17:48.567 Flexible Data Placement Supported: Not Supported 00:17:48.567 00:17:48.567 Controller Memory Buffer Support 00:17:48.567 
================================ 00:17:48.567 Supported: No 00:17:48.567 00:17:48.567 Persistent Memory Region Support 00:17:48.567 ================================ 00:17:48.567 Supported: No 00:17:48.567 00:17:48.567 Admin Command Set Attributes 00:17:48.567 ============================ 00:17:48.568 Security Send/Receive: Not Supported 00:17:48.568 Format NVM: Not Supported 00:17:48.568 Firmware Activate/Download: Not Supported 00:17:48.568 Namespace Management: Not Supported 00:17:48.568 Device Self-Test: Not Supported 00:17:48.568 Directives: Not Supported 00:17:48.568 NVMe-MI: Not Supported 00:17:48.568 Virtualization Management: Not Supported 00:17:48.568 Doorbell Buffer Config: Not Supported 00:17:48.568 Get LBA Status Capability: Not Supported 00:17:48.568 Command & Feature Lockdown Capability: Not Supported 00:17:48.568 Abort Command Limit: 4 00:17:48.568 Async Event Request Limit: 4 00:17:48.568 Number of Firmware Slots: N/A 00:17:48.568 Firmware Slot 1 Read-Only: N/A 00:17:48.568 Firmware Activation Without Reset: N/A 00:17:48.568 Multiple Update Detection Support: N/A 00:17:48.568 Firmware Update Granularity: No Information Provided 00:17:48.568 Per-Namespace SMART Log: No 00:17:48.568 Asymmetric Namespace Access Log Page: Not Supported 00:17:48.568 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:48.568 Command Effects Log Page: Supported 00:17:48.568 Get Log Page Extended Data: Supported 00:17:48.568 Telemetry Log Pages: Not Supported 00:17:48.568 Persistent Event Log Pages: Not Supported 00:17:48.568 Supported Log Pages Log Page: May Support 00:17:48.568 Commands Supported & Effects Log Page: Not Supported 00:17:48.568 Feature Identifiers & Effects Log Page:May Support 00:17:48.568 NVMe-MI Commands & Effects Log Page: May Support 00:17:48.568 Data Area 4 for Telemetry Log: Not Supported 00:17:48.568 Error Log Page Entries Supported: 128 00:17:48.568 Keep Alive: Supported 00:17:48.568 Keep Alive Granularity: 10000 ms 00:17:48.568 00:17:48.568 NVM Command Set Attributes 00:17:48.568 ========================== 00:17:48.568 Submission Queue Entry Size 00:17:48.568 Max: 64 00:17:48.568 Min: 64 00:17:48.568 Completion Queue Entry Size 00:17:48.568 Max: 16 00:17:48.568 Min: 16 00:17:48.568 Number of Namespaces: 32 00:17:48.568 Compare Command: Supported 00:17:48.568 Write Uncorrectable Command: Not Supported 00:17:48.568 Dataset Management Command: Supported 00:17:48.568 Write Zeroes Command: Supported 00:17:48.568 Set Features Save Field: Not Supported 00:17:48.568 Reservations: Supported 00:17:48.568 Timestamp: Not Supported 00:17:48.568 Copy: Supported 00:17:48.568 Volatile Write Cache: Present 00:17:48.568 Atomic Write Unit (Normal): 1 00:17:48.568 Atomic Write Unit (PFail): 1 00:17:48.568 Atomic Compare & Write Unit: 1 00:17:48.568 Fused Compare & Write: Supported 00:17:48.568 Scatter-Gather List 00:17:48.568 SGL Command Set: Supported 00:17:48.568 SGL Keyed: Supported 00:17:48.568 SGL Bit Bucket Descriptor: Not Supported 00:17:48.568 SGL Metadata Pointer: Not Supported 00:17:48.568 Oversized SGL: Not Supported 00:17:48.568 SGL Metadata Address: Not Supported 00:17:48.568 SGL Offset: Supported 00:17:48.568 Transport SGL Data Block: Not Supported 00:17:48.568 Replay Protected Memory Block: Not Supported 00:17:48.568 00:17:48.568 Firmware Slot Information 00:17:48.568 ========================= 00:17:48.568 Active slot: 1 00:17:48.568 Slot 1 Firmware Revision: 24.05 00:17:48.568 00:17:48.568 00:17:48.568 Commands Supported and Effects 00:17:48.568 ============================== 
00:17:48.568 Admin Commands 00:17:48.568 -------------- 00:17:48.568 Get Log Page (02h): Supported 00:17:48.568 Identify (06h): Supported 00:17:48.568 Abort (08h): Supported 00:17:48.568 Set Features (09h): Supported 00:17:48.568 Get Features (0Ah): Supported 00:17:48.568 Asynchronous Event Request (0Ch): Supported 00:17:48.568 Keep Alive (18h): Supported 00:17:48.568 I/O Commands 00:17:48.568 ------------ 00:17:48.568 Flush (00h): Supported LBA-Change 00:17:48.568 Write (01h): Supported LBA-Change 00:17:48.568 Read (02h): Supported 00:17:48.568 Compare (05h): Supported 00:17:48.568 Write Zeroes (08h): Supported LBA-Change 00:17:48.568 Dataset Management (09h): Supported LBA-Change 00:17:48.568 Copy (19h): Supported LBA-Change 00:17:48.568 Unknown (79h): Supported LBA-Change 00:17:48.568 Unknown (7Ah): Supported 00:17:48.568 00:17:48.568 Error Log 00:17:48.568 ========= 00:17:48.568 00:17:48.568 Arbitration 00:17:48.568 =========== 00:17:48.568 Arbitration Burst: 1 00:17:48.568 00:17:48.568 Power Management 00:17:48.568 ================ 00:17:48.568 Number of Power States: 1 00:17:48.568 Current Power State: Power State #0 00:17:48.568 Power State #0: 00:17:48.568 Max Power: 0.00 W 00:17:48.568 Non-Operational State: Operational 00:17:48.568 Entry Latency: Not Reported 00:17:48.568 Exit Latency: Not Reported 00:17:48.568 Relative Read Throughput: 0 00:17:48.568 Relative Read Latency: 0 00:17:48.568 Relative Write Throughput: 0 00:17:48.568 Relative Write Latency: 0 00:17:48.568 Idle Power: Not Reported 00:17:48.568 Active Power: Not Reported 00:17:48.568 Non-Operational Permissive Mode: Not Supported 00:17:48.568 00:17:48.568 Health Information 00:17:48.568 ================== 00:17:48.568 Critical Warnings: 00:17:48.568 Available Spare Space: OK 00:17:48.568 Temperature: OK 00:17:48.568 Device Reliability: OK 00:17:48.568 Read Only: No 00:17:48.568 Volatile Memory Backup: OK 00:17:48.568 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:48.568 Temperature Threshold: [2024-04-18 13:46:51.209512] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.209558] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.209569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.209579] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209616] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:48.568 [2024-04-18 13:46:51.209633] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14901 doesn't match qid 00:17:48.568 [2024-04-18 13:46:51.209654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32601 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.209665] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14901 doesn't match qid 00:17:48.568 [2024-04-18 13:46:51.209679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32601 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 
13:46:51.209689] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14901 doesn't match qid 00:17:48.568 [2024-04-18 13:46:51.209702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32601 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.209712] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14901 doesn't match qid 00:17:48.568 [2024-04-18 13:46:51.209729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32601 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.209744] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.209782] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.209793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.209806] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.209829] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209856] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.209867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.209876] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:48.568 [2024-04-18 13:46:51.209886] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:48.568 [2024-04-18 13:46:51.209895] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209908] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.209947] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.209959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.209970] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209985] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.209998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210028] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210049] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210064] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210098] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210119] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210133] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210171] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210190] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210204] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210237] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210256] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210271] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210304] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210323] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210336] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210369] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210388] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210402] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210434] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210453] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210466] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210503] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210522] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210536] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210572] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210591] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210605] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210640] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 
13:46:51.210659] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210673] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210710] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210729] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210743] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210780] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210798] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210812] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.568 [2024-04-18 13:46:51.210846] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.568 [2024-04-18 13:46:51.210856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:48.568 [2024-04-18 13:46:51.210865] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183900 00:17:48.568 [2024-04-18 13:46:51.210879] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.569 [2024-04-18 13:46:51.210891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.569 [2024-04-18 13:46:51.210914] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.569 [2024-04-18 13:46:51.210923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:48.569 [2024-04-18 13:46:51.210933] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183900 00:17:48.569 [2024-04-18 13:46:51.214969] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183900 00:17:48.569 [2024-04-18 13:46:51.214987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:48.569 [2024-04-18 13:46:51.215012] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:48.569 [2024-04-18 13:46:51.215022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:17:48.569 [2024-04-18 13:46:51.215032] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183900 00:17:48.569 [2024-04-18 13:46:51.215043] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:17:48.569 0 Kelvin (-273 Celsius) 00:17:48.569 Available Spare: 0% 00:17:48.569 Available Spare Threshold: 0% 00:17:48.569 Life Percentage Used: 0% 00:17:48.569 Data Units Read: 0 00:17:48.569 Data Units Written: 0 00:17:48.569 Host Read Commands: 0 00:17:48.569 Host Write Commands: 0 00:17:48.569 Controller Busy Time: 0 minutes 00:17:48.569 Power Cycles: 0 00:17:48.569 Power On Hours: 0 hours 00:17:48.569 Unsafe Shutdowns: 0 00:17:48.569 Unrecoverable Media Errors: 0 00:17:48.569 Lifetime Error Log Entries: 0 00:17:48.569 Warning Temperature Time: 0 minutes 00:17:48.569 Critical Temperature Time: 0 minutes 00:17:48.569 00:17:48.569 Number of Queues 00:17:48.569 ================ 00:17:48.569 Number of I/O Submission Queues: 127 00:17:48.569 Number of I/O Completion Queues: 127 00:17:48.569 00:17:48.569 Active Namespaces 00:17:48.569 ================= 00:17:48.569 Namespace ID:1 00:17:48.569 Error Recovery Timeout: Unlimited 00:17:48.569 Command Set Identifier: NVM (00h) 00:17:48.569 Deallocate: Supported 00:17:48.569 Deallocated/Unwritten Error: Not Supported 00:17:48.569 Deallocated Read Value: Unknown 00:17:48.569 Deallocate in Write Zeroes: Not Supported 00:17:48.569 Deallocated Guard Field: 0xFFFF 00:17:48.569 Flush: Supported 00:17:48.569 Reservation: Supported 00:17:48.569 Namespace Sharing Capabilities: Multiple Controllers 00:17:48.569 Size (in LBAs): 131072 (0GiB) 00:17:48.569 Capacity (in LBAs): 131072 (0GiB) 00:17:48.569 Utilization (in LBAs): 131072 (0GiB) 00:17:48.569 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:48.569 EUI64: ABCDEF0123456789 00:17:48.569 UUID: c6c6d399-2b1f-4b18-a078-9ba15bddc1b9 00:17:48.569 Thin Provisioning: Not Supported 00:17:48.569 Per-NS Atomic Units: Yes 00:17:48.569 Atomic Boundary Size (Normal): 0 00:17:48.569 Atomic Boundary Size (PFail): 0 00:17:48.569 Atomic Boundary Offset: 0 00:17:48.569 Maximum Single Source Range Length: 65535 00:17:48.569 Maximum Copy Length: 65535 00:17:48.569 Maximum Source Range Count: 1 00:17:48.569 NGUID/EUI64 Never Reused: No 00:17:48.569 Namespace Write Protected: No 00:17:48.569 Number of LBA Formats: 1 00:17:48.569 Current LBA Format: LBA Format #00 00:17:48.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:48.569 00:17:48.569 13:46:51 -- host/identify.sh@51 -- # sync 00:17:48.569 13:46:51 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.569 13:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.569 13:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 13:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.569 13:46:51 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:48.569 13:46:51 -- host/identify.sh@56 -- # nvmftestfini 00:17:48.569 13:46:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:48.569 13:46:51 -- nvmf/common.sh@117 -- # sync 00:17:48.569 13:46:51 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:48.569 13:46:51 -- nvmf/common.sh@119 -- # '[' rdma == rdma 
']' 00:17:48.569 13:46:51 -- nvmf/common.sh@120 -- # set +e 00:17:48.569 13:46:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.569 13:46:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:48.569 rmmod nvme_rdma 00:17:48.569 rmmod nvme_fabrics 00:17:48.569 13:46:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.569 13:46:51 -- nvmf/common.sh@124 -- # set -e 00:17:48.569 13:46:51 -- nvmf/common.sh@125 -- # return 0 00:17:48.569 13:46:51 -- nvmf/common.sh@478 -- # '[' -n 1172269 ']' 00:17:48.569 13:46:51 -- nvmf/common.sh@479 -- # killprocess 1172269 00:17:48.569 13:46:51 -- common/autotest_common.sh@936 -- # '[' -z 1172269 ']' 00:17:48.569 13:46:51 -- common/autotest_common.sh@940 -- # kill -0 1172269 00:17:48.569 13:46:51 -- common/autotest_common.sh@941 -- # uname 00:17:48.569 13:46:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.569 13:46:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1172269 00:17:48.569 13:46:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:48.569 13:46:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:48.569 13:46:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1172269' 00:17:48.569 killing process with pid 1172269 00:17:48.569 13:46:51 -- common/autotest_common.sh@955 -- # kill 1172269 00:17:48.569 [2024-04-18 13:46:51.345820] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:48.569 13:46:51 -- common/autotest_common.sh@960 -- # wait 1172269 00:17:49.134 13:46:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:49.134 13:46:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:49.134 00:17:49.134 real 0m5.036s 00:17:49.134 user 0m8.796s 00:17:49.134 sys 0m2.471s 00:17:49.134 13:46:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:49.134 13:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.134 ************************************ 00:17:49.134 END TEST nvmf_identify 00:17:49.134 ************************************ 00:17:49.134 13:46:51 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:49.134 13:46:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:49.134 13:46:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:49.134 13:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.134 ************************************ 00:17:49.134 START TEST nvmf_perf 00:17:49.134 ************************************ 00:17:49.134 13:46:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:49.391 * Looking for test storage... 
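For reference, the identify-test teardown traced above reduces to a short sequence. This is an illustrative sketch rather than a verbatim extract of host/identify.sh; $SPDK_DIR stands in for the jenkins workspace checkout and <nvmf_tgt_pid> for the target PID (1172269 in this run):

    # drop the test subsystem from the running target (rpc.py talks to /var/tmp/spdk.sock by default)
    $SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the host-side fabrics modules so the next test starts from a clean state
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics
    # stop the target; the harness's killprocess helper also waits for the PID to disappear
    kill <nvmf_tgt_pid>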
00:17:49.391 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:49.391 13:46:51 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.391 13:46:51 -- nvmf/common.sh@7 -- # uname -s 00:17:49.391 13:46:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.391 13:46:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.391 13:46:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.391 13:46:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.391 13:46:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.391 13:46:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.391 13:46:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.391 13:46:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.391 13:46:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.391 13:46:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.391 13:46:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:49.391 13:46:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:49.391 13:46:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.391 13:46:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.391 13:46:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.391 13:46:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.391 13:46:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:49.391 13:46:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.391 13:46:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.391 13:46:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.391 13:46:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.391 13:46:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.391 13:46:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.391 13:46:51 -- paths/export.sh@5 -- # export PATH 00:17:49.391 13:46:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.391 13:46:51 -- nvmf/common.sh@47 -- # : 0 00:17:49.391 13:46:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.391 13:46:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.391 13:46:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.391 13:46:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.391 13:46:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.391 13:46:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.391 13:46:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.391 13:46:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.391 13:46:51 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:49.391 13:46:51 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:49.391 13:46:51 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:49.391 13:46:51 -- host/perf.sh@17 -- # nvmftestinit 00:17:49.391 13:46:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:49.391 13:46:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.391 13:46:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:49.391 13:46:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:49.391 13:46:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:49.392 13:46:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.392 13:46:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.392 13:46:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.392 13:46:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:49.392 13:46:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:49.392 13:46:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:49.392 13:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:51.917 13:46:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:51.917 13:46:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.917 13:46:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.917 13:46:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.917 13:46:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.917 13:46:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.917 13:46:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.917 13:46:54 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:51.917 13:46:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.917 13:46:54 -- nvmf/common.sh@296 -- # e810=() 00:17:51.917 13:46:54 -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.917 13:46:54 -- nvmf/common.sh@297 -- # x722=() 00:17:51.917 13:46:54 -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.917 13:46:54 -- nvmf/common.sh@298 -- # mlx=() 00:17:51.917 13:46:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.917 13:46:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.917 13:46:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.918 13:46:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.918 13:46:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.918 13:46:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.918 13:46:54 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:51.918 13:46:54 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:51.918 13:46:54 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:51.918 13:46:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.918 13:46:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:51.918 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:51.918 13:46:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.918 13:46:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:51.918 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:51.918 13:46:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.918 13:46:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.918 13:46:54 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.918 13:46:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:51.918 13:46:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.918 13:46:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:51.918 Found net devices under 0000:81:00.0: mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.918 13:46:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.918 13:46:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:51.918 13:46:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.918 13:46:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:51.918 Found net devices under 0000:81:00.1: mlx_0_1 00:17:51.918 13:46:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.918 13:46:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:51.918 13:46:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:51.918 13:46:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:51.918 13:46:54 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:51.918 13:46:54 -- nvmf/common.sh@58 -- # uname 00:17:51.918 13:46:54 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:51.918 13:46:54 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:51.918 13:46:54 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:51.918 13:46:54 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:51.918 13:46:54 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:51.918 13:46:54 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:51.918 13:46:54 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:51.918 13:46:54 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:51.918 13:46:54 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:51.918 13:46:54 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:51.918 13:46:54 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:51.918 13:46:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.918 13:46:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:51.918 13:46:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:51.918 13:46:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.918 13:46:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:51.918 13:46:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@105 -- # continue 2 00:17:51.918 13:46:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:51.918 13:46:54 -- 
nvmf/common.sh@105 -- # continue 2 00:17:51.918 13:46:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:51.918 13:46:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.918 13:46:54 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:51.918 13:46:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:51.918 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:51.918 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:51.918 altname enp129s0f0np0 00:17:51.918 inet 192.168.100.8/24 scope global mlx_0_0 00:17:51.918 valid_lft forever preferred_lft forever 00:17:51.918 13:46:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:51.918 13:46:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:51.918 13:46:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.918 13:46:54 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:51.918 13:46:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:51.918 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:51.918 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:51.918 altname enp129s0f1np1 00:17:51.918 inet 192.168.100.9/24 scope global mlx_0_1 00:17:51.918 valid_lft forever preferred_lft forever 00:17:51.918 13:46:54 -- nvmf/common.sh@411 -- # return 0 00:17:51.918 13:46:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:51.918 13:46:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:51.918 13:46:54 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:51.918 13:46:54 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:51.918 13:46:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.918 13:46:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:51.918 13:46:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:51.918 13:46:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.918 13:46:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:51.918 13:46:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@105 -- # continue 2 00:17:51.918 13:46:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.918 13:46:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:51.918 13:46:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 
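The interface probing traced here can be reproduced by hand. A rough sketch, assuming the same mlx_0_0/mlx_0_1 netdev names and 192.168.100.x addressing that the harness reports:

    # load the kernel RDMA stack the same way nvmf/common.sh does
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe "$m"; done
    # read back the IPv4 address on each Mellanox port
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.9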
00:17:51.918 13:46:54 -- nvmf/common.sh@105 -- # continue 2 00:17:51.918 13:46:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:51.918 13:46:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.918 13:46:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:51.918 13:46:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:51.918 13:46:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.918 13:46:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.918 13:46:54 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:51.918 192.168.100.9' 00:17:51.918 13:46:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:51.918 192.168.100.9' 00:17:51.918 13:46:54 -- nvmf/common.sh@446 -- # head -n 1 00:17:51.918 13:46:54 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:51.918 13:46:54 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:51.918 192.168.100.9' 00:17:51.918 13:46:54 -- nvmf/common.sh@447 -- # tail -n +2 00:17:51.919 13:46:54 -- nvmf/common.sh@447 -- # head -n 1 00:17:51.919 13:46:54 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:51.919 13:46:54 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:51.919 13:46:54 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:51.919 13:46:54 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:51.919 13:46:54 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:51.919 13:46:54 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:51.919 13:46:54 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:51.919 13:46:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:51.919 13:46:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:51.919 13:46:54 -- common/autotest_common.sh@10 -- # set +x 00:17:51.919 13:46:54 -- nvmf/common.sh@470 -- # nvmfpid=1174588 00:17:51.919 13:46:54 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.919 13:46:54 -- nvmf/common.sh@471 -- # waitforlisten 1174588 00:17:51.919 13:46:54 -- common/autotest_common.sh@817 -- # '[' -z 1174588 ']' 00:17:51.919 13:46:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.919 13:46:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:51.919 13:46:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.919 13:46:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:51.919 13:46:54 -- common/autotest_common.sh@10 -- # set +x 00:17:51.919 [2024-04-18 13:46:54.707179] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:17:51.919 [2024-04-18 13:46:54.707288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.176 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.176 [2024-04-18 13:46:54.793676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.176 [2024-04-18 13:46:54.919227] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.177 [2024-04-18 13:46:54.919292] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.177 [2024-04-18 13:46:54.919308] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.177 [2024-04-18 13:46:54.919330] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.177 [2024-04-18 13:46:54.919343] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.177 [2024-04-18 13:46:54.919444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.177 [2024-04-18 13:46:54.919517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.177 [2024-04-18 13:46:54.919567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.177 [2024-04-18 13:46:54.919570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.434 13:46:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:52.434 13:46:55 -- common/autotest_common.sh@850 -- # return 0 00:17:52.434 13:46:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:52.434 13:46:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:52.434 13:46:55 -- common/autotest_common.sh@10 -- # set +x 00:17:52.434 13:46:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.434 13:46:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:52.434 13:46:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:55.917 13:46:58 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:55.917 13:46:58 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:55.917 13:46:58 -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:17:55.917 13:46:58 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:56.175 13:46:58 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:56.175 13:46:58 -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:17:56.175 13:46:58 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:56.175 13:46:58 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:17:56.175 13:46:58 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:17:56.740 [2024-04-18 13:46:59.260383] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:17:56.740 [2024-04-18 13:46:59.285294] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19d04e0/0x19de240) succeed. 
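Stripped of xtrace noise, the target bring-up and initial configuration just traced amounts to the following. This is a sketch under the assumption that $SPDK_DIR points at the same checkout used by this job; the poll loop stands in for the harness's waitforlisten helper:

    # start the NVMe-oF target: shm id 0, tracepoint group mask 0xFFFF, cores 0-3 (-m 0xF)
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait for the RPC socket to come up before configuring anything
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # attach the local NVMe drive; the harness feeds gen_nvme.sh output to load_subsystem_config
    $SPDK_DIR/scripts/gen_nvme.sh | $SPDK_DIR/scripts/rpc.py load_subsystem_config
    # create a 64 MiB malloc bdev with a 512-byte block size ("Malloc0")
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512
    # create the RDMA transport; -c 0 requests zero in-capsule data, which the target bumps
    # to its 256-byte minimum (see the msdbd=16 warning above)
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0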
00:17:56.740 [2024-04-18 13:46:59.297865] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19d1ad0/0x1a5e2c0) succeed. 00:17:56.740 13:46:59 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.997 13:46:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:56.997 13:46:59 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.561 13:47:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:57.561 13:47:00 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:57.818 13:47:00 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:58.077 [2024-04-18 13:47:00.730814] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:58.077 13:47:00 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:58.334 13:47:01 -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:17:58.334 13:47:01 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:17:58.334 13:47:01 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:58.334 13:47:01 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:17:59.711 Initializing NVMe Controllers 00:17:59.711 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:59.711 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:17:59.711 Initialization complete. Launching workers. 00:17:59.711 ======================================================== 00:17:59.712 Latency(us) 00:17:59.712 Device Information : IOPS MiB/s Average min max 00:17:59.712 PCIE (0000:84:00.0) NSID 1 from core 0: 74077.21 289.36 431.24 43.75 4484.01 00:17:59.712 ======================================================== 00:17:59.712 Total : 74077.21 289.36 431.24 43.75 4484.01 00:17:59.712 00:17:59.712 13:47:02 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:17:59.712 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.988 Initializing NVMe Controllers 00:18:02.988 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.988 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.988 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:02.988 Initialization complete. Launching workers. 
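The subsystem wiring for the perf runs that follow is visible in the rpc.py calls traced above; collected in one place (with $SPDK_DIR again standing in for the workspace path), the sequence is:

    # create the subsystem, allow any host (-a), and set its serial number
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # expose the malloc bdev and the local NVMe drive as namespaces 1 and 2
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # listen on the first Mellanox port, plus a discovery listener on the same address
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420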
00:18:02.988 ======================================================== 00:18:02.988 Latency(us) 00:18:02.988 Device Information : IOPS MiB/s Average min max 00:18:02.988 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5109.00 19.96 195.45 75.51 5070.70 00:18:02.988 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4144.00 16.19 241.04 97.77 5067.09 00:18:02.988 ======================================================== 00:18:02.988 Total : 9253.00 36.14 215.87 75.51 5070.70 00:18:02.988 00:18:02.988 13:47:05 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:18:02.988 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.168 Initializing NVMe Controllers 00:18:07.168 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.168 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:07.168 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:07.168 Initialization complete. Launching workers. 00:18:07.168 ======================================================== 00:18:07.168 Latency(us) 00:18:07.168 Device Information : IOPS MiB/s Average min max 00:18:07.168 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13313.97 52.01 2409.97 683.54 6217.07 00:18:07.168 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.99 15.75 7970.64 6835.70 8310.11 00:18:07.168 ======================================================== 00:18:07.168 Total : 17345.97 67.76 3702.52 683.54 8310.11 00:18:07.168 00:18:07.168 13:47:09 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:18:07.168 13:47:09 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:18:07.168 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.349 Initializing NVMe Controllers 00:18:11.349 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.349 Controller IO queue size 128, less than required. 00:18:11.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.349 Controller IO queue size 128, less than required. 00:18:11.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.349 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:11.349 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:11.349 Initialization complete. Launching workers. 
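Each of the fabric runs in this section is a spdk_nvme_perf invocation of the same shape; only queue depth (-q), I/O size (-o), duration (-t), and extra flags vary between runs. A representative sketch, modeled on the commands traced above:

    # queue depth 32, 4 KiB I/Os, 50/50 random read/write mix for 1 second,
    # against the RDMA listener created earlier
    $SPDK_DIR/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'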
00:18:11.349 ======================================================== 00:18:11.349 Latency(us) 00:18:11.349 Device Information : IOPS MiB/s Average min max 00:18:11.349 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2771.92 692.98 46263.52 19038.03 115200.97 00:18:11.349 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2357.33 589.33 53716.99 9914.31 165098.08 00:18:11.349 ======================================================== 00:18:11.349 Total : 5129.25 1282.31 49689.03 9914.31 165098.08 00:18:11.349 00:18:11.349 13:47:13 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:18:11.349 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.349 No valid NVMe controllers or AIO or URING devices found 00:18:11.349 Initializing NVMe Controllers 00:18:11.349 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.349 Controller IO queue size 128, less than required. 00:18:11.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.350 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:11.350 Controller IO queue size 128, less than required. 00:18:11.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.350 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:18:11.350 WARNING: Some requested NVMe devices were skipped 00:18:11.350 13:47:14 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:18:11.350 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.609 Initializing NVMe Controllers 00:18:16.609 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:16.609 Controller IO queue size 128, less than required. 00:18:16.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:16.609 Controller IO queue size 128, less than required. 00:18:16.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:16.609 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:16.609 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:16.609 Initialization complete. Launching workers. 
00:18:16.609 00:18:16.609 ==================== 00:18:16.609 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:16.609 RDMA transport: 00:18:16.609 dev name: mlx5_0 00:18:16.609 polls: 287980 00:18:16.609 idle_polls: 285524 00:18:16.609 completions: 31050 00:18:16.609 queued_requests: 1 00:18:16.609 total_send_wrs: 15525 00:18:16.609 send_doorbell_updates: 2250 00:18:16.609 total_recv_wrs: 15652 00:18:16.609 recv_doorbell_updates: 2252 00:18:16.609 --------------------------------- 00:18:16.609 00:18:16.609 ==================== 00:18:16.609 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:16.609 RDMA transport: 00:18:16.609 dev name: mlx5_0 00:18:16.609 polls: 294982 00:18:16.609 idle_polls: 294708 00:18:16.609 completions: 15590 00:18:16.609 queued_requests: 1 00:18:16.609 total_send_wrs: 7795 00:18:16.609 send_doorbell_updates: 254 00:18:16.609 total_recv_wrs: 7922 00:18:16.609 recv_doorbell_updates: 255 00:18:16.609 --------------------------------- 00:18:16.609 ======================================================== 00:18:16.609 Latency(us) 00:18:16.609 Device Information : IOPS MiB/s Average min max 00:18:16.609 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3880.49 970.12 33023.74 16131.95 76795.41 00:18:16.609 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1948.24 487.06 65707.59 31755.63 104092.55 00:18:16.609 ======================================================== 00:18:16.609 Total : 5828.73 1457.18 43948.26 16131.95 104092.55 00:18:16.609 00:18:16.609 13:47:18 -- host/perf.sh@66 -- # sync 00:18:16.609 13:47:18 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.609 13:47:18 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:16.609 13:47:18 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:16.609 13:47:18 -- host/perf.sh@114 -- # nvmftestfini 00:18:16.609 13:47:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:16.609 13:47:18 -- nvmf/common.sh@117 -- # sync 00:18:16.609 13:47:18 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:16.609 13:47:18 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:16.609 13:47:18 -- nvmf/common.sh@120 -- # set +e 00:18:16.609 13:47:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.609 13:47:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:16.609 rmmod nvme_rdma 00:18:16.609 rmmod nvme_fabrics 00:18:16.609 13:47:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.609 13:47:18 -- nvmf/common.sh@124 -- # set -e 00:18:16.609 13:47:18 -- nvmf/common.sh@125 -- # return 0 00:18:16.609 13:47:18 -- nvmf/common.sh@478 -- # '[' -n 1174588 ']' 00:18:16.609 13:47:18 -- nvmf/common.sh@479 -- # killprocess 1174588 00:18:16.609 13:47:18 -- common/autotest_common.sh@936 -- # '[' -z 1174588 ']' 00:18:16.609 13:47:18 -- common/autotest_common.sh@940 -- # kill -0 1174588 00:18:16.609 13:47:18 -- common/autotest_common.sh@941 -- # uname 00:18:16.609 13:47:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.609 13:47:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1174588 00:18:16.609 13:47:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:16.609 13:47:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:16.609 13:47:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1174588' 00:18:16.609 
killing process with pid 1174588 00:18:16.609 13:47:18 -- common/autotest_common.sh@955 -- # kill 1174588 00:18:16.609 13:47:18 -- common/autotest_common.sh@960 -- # wait 1174588 00:18:17.981 13:47:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:17.981 13:47:20 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:17.981 00:18:17.981 real 0m28.770s 00:18:17.981 user 1m45.148s 00:18:17.981 sys 0m3.323s 00:18:17.981 13:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:17.981 13:47:20 -- common/autotest_common.sh@10 -- # set +x 00:18:17.981 ************************************ 00:18:17.981 END TEST nvmf_perf 00:18:17.981 ************************************ 00:18:17.981 13:47:20 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:18:17.981 13:47:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:17.981 13:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:17.981 13:47:20 -- common/autotest_common.sh@10 -- # set +x 00:18:18.239 ************************************ 00:18:18.239 START TEST nvmf_fio_host 00:18:18.239 ************************************ 00:18:18.239 13:47:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:18:18.239 * Looking for test storage... 00:18:18.239 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:18.239 13:47:20 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:18.239 13:47:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.239 13:47:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.239 13:47:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.239 13:47:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- paths/export.sh@5 -- # export PATH 00:18:18.239 13:47:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.239 13:47:20 -- nvmf/common.sh@7 -- # uname -s 00:18:18.239 13:47:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.239 13:47:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.239 13:47:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.239 13:47:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.239 13:47:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.239 13:47:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.239 13:47:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.239 13:47:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.239 13:47:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.239 13:47:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.239 13:47:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:18.239 13:47:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:18:18.239 13:47:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.239 13:47:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.239 13:47:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.239 13:47:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.239 13:47:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:18.239 13:47:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.239 13:47:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.239 13:47:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.239 13:47:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- paths/export.sh@5 -- # export PATH 00:18:18.239 13:47:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 13:47:20 -- nvmf/common.sh@47 -- # : 0 00:18:18.240 13:47:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.240 13:47:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.240 13:47:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.240 13:47:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.240 13:47:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.240 13:47:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.240 13:47:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.240 13:47:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.240 13:47:20 -- host/fio.sh@12 -- # nvmftestinit 00:18:18.240 13:47:20 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:18.240 13:47:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.240 13:47:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:18.240 13:47:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:18.240 13:47:20 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:18:18.240 13:47:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.240 13:47:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.240 13:47:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.240 13:47:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:18.240 13:47:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:18.240 13:47:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.240 13:47:20 -- common/autotest_common.sh@10 -- # set +x 00:18:21.542 13:47:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:21.542 13:47:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.542 13:47:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.542 13:47:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.542 13:47:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.542 13:47:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.542 13:47:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.542 13:47:23 -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.542 13:47:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.542 13:47:23 -- nvmf/common.sh@296 -- # e810=() 00:18:21.542 13:47:23 -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.542 13:47:23 -- nvmf/common.sh@297 -- # x722=() 00:18:21.542 13:47:23 -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.542 13:47:23 -- nvmf/common.sh@298 -- # mlx=() 00:18:21.542 13:47:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.542 13:47:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.542 13:47:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.542 13:47:23 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:21.542 13:47:23 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:21.542 13:47:23 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:21.542 13:47:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.542 13:47:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.542 13:47:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:18:21.542 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:18:21.542 13:47:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@351 
-- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:21.542 13:47:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.542 13:47:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:18:21.542 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:18:21.542 13:47:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:21.542 13:47:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.542 13:47:23 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:21.542 13:47:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.543 13:47:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:21.543 13:47:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.543 13:47:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:18:21.543 Found net devices under 0000:81:00.0: mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.543 13:47:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.543 13:47:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:21.543 13:47:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.543 13:47:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:18:21.543 Found net devices under 0000:81:00.1: mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.543 13:47:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:21.543 13:47:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:21.543 13:47:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:21.543 13:47:23 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:21.543 13:47:23 -- nvmf/common.sh@58 -- # uname 00:18:21.543 13:47:23 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:21.543 13:47:23 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:21.543 13:47:23 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:21.543 13:47:23 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:21.543 13:47:23 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:21.543 13:47:23 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:21.543 13:47:23 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:21.543 13:47:23 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:21.543 13:47:23 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:21.543 13:47:23 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:21.543 13:47:23 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:21.543 13:47:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:21.543 13:47:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:21.543 
13:47:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:21.543 13:47:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:21.543 13:47:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:21.543 13:47:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@105 -- # continue 2 00:18:21.543 13:47:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@105 -- # continue 2 00:18:21.543 13:47:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:21.543 13:47:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:21.543 13:47:23 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:21.543 13:47:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:21.543 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:21.543 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:18:21.543 altname enp129s0f0np0 00:18:21.543 inet 192.168.100.8/24 scope global mlx_0_0 00:18:21.543 valid_lft forever preferred_lft forever 00:18:21.543 13:47:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:21.543 13:47:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:21.543 13:47:23 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:21.543 13:47:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:21.543 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:21.543 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:18:21.543 altname enp129s0f1np1 00:18:21.543 inet 192.168.100.9/24 scope global mlx_0_1 00:18:21.543 valid_lft forever preferred_lft forever 00:18:21.543 13:47:23 -- nvmf/common.sh@411 -- # return 0 00:18:21.543 13:47:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:21.543 13:47:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:21.543 13:47:23 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:21.543 13:47:23 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:21.543 13:47:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:21.543 13:47:23 -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:18:21.543 13:47:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:21.543 13:47:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:21.543 13:47:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:21.543 13:47:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@105 -- # continue 2 00:18:21.543 13:47:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:21.543 13:47:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:21.543 13:47:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@105 -- # continue 2 00:18:21.543 13:47:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:21.543 13:47:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:21.543 13:47:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:21.543 13:47:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:21.543 13:47:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:21.543 13:47:23 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:21.543 192.168.100.9' 00:18:21.543 13:47:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:21.543 192.168.100.9' 00:18:21.543 13:47:23 -- nvmf/common.sh@446 -- # head -n 1 00:18:21.543 13:47:23 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:21.543 13:47:23 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:21.543 192.168.100.9' 00:18:21.543 13:47:23 -- nvmf/common.sh@447 -- # tail -n +2 00:18:21.543 13:47:23 -- nvmf/common.sh@447 -- # head -n 1 00:18:21.543 13:47:23 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:21.543 13:47:23 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:21.543 13:47:23 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:21.543 13:47:23 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:21.543 13:47:23 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:21.543 13:47:23 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:21.543 13:47:23 -- host/fio.sh@14 -- # [[ y != y ]] 00:18:21.543 13:47:23 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:18:21.543 13:47:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:21.543 13:47:23 -- common/autotest_common.sh@10 -- # set +x 00:18:21.543 13:47:23 -- host/fio.sh@22 -- # nvmfpid=1179623 00:18:21.543 13:47:23 -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:21.543 13:47:23 -- 
host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:21.543 13:47:23 -- host/fio.sh@26 -- # waitforlisten 1179623 00:18:21.543 13:47:23 -- common/autotest_common.sh@817 -- # '[' -z 1179623 ']' 00:18:21.543 13:47:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.543 13:47:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.543 13:47:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.543 13:47:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.543 13:47:23 -- common/autotest_common.sh@10 -- # set +x 00:18:21.543 [2024-04-18 13:47:23.772111] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:18:21.543 [2024-04-18 13:47:23.772196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.543 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.543 [2024-04-18 13:47:23.850513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.543 [2024-04-18 13:47:23.973675] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.543 [2024-04-18 13:47:23.973732] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.544 [2024-04-18 13:47:23.973748] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.544 [2024-04-18 13:47:23.973761] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.544 [2024-04-18 13:47:23.973781] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.544 [2024-04-18 13:47:23.973870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.544 [2024-04-18 13:47:23.973928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.544 [2024-04-18 13:47:23.973975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.544 [2024-04-18 13:47:23.973979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.544 13:47:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.544 13:47:24 -- common/autotest_common.sh@850 -- # return 0 00:18:21.544 13:47:24 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:21.544 13:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.544 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.544 [2024-04-18 13:47:24.139518] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x174f090/0x1753580) succeed. 00:18:21.544 [2024-04-18 13:47:24.151843] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1750680/0x1794c10) succeed. 
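Editor's note: the trace around this point shows the fio-host test standing up an NVMe-oF RDMA target: nvmf_tgt is started, the rdma transport is created, a malloc bdev is exported through subsystem nqn.2016-06.io.spdk:cnode1, and a listener is added on 192.168.100.8:4420. The following is a minimal sketch of that same bring-up using the rpc.py calls visible in this log; the workspace path, address, port, serial number, and NQN are specific to this run, and the RPC shorthand variable is introduced here for readability only.
    # sketch of the target bring-up traced in this section (values taken from this run)
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192          # RDMA transport, 8 KiB I/O unit
    $RPC bdev_malloc_create 64 512 -b Malloc1                                     # 64 MiB / 512 B-block backing bdev
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                 # expose the bdev as namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
The fio runs that follow then connect through the SPDK fio plugin with --filename='trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1', as shown in the traced fio_nvme invocations below.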
00:18:21.544 13:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.544 13:47:24 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:18:21.544 13:47:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:21.544 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 13:47:24 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:21.825 13:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.825 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 Malloc1 00:18:21.825 13:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.825 13:47:24 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:21.825 13:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.825 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 13:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.825 13:47:24 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.825 13:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.825 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 13:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.825 13:47:24 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:21.825 13:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.825 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 [2024-04-18 13:47:24.401904] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:21.825 13:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.825 13:47:24 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:21.825 13:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.825 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 13:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.825 13:47:24 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:18:21.825 13:47:24 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:21.825 13:47:24 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:21.826 13:47:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:21.826 13:47:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:21.826 13:47:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:21.826 13:47:24 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:21.826 13:47:24 -- common/autotest_common.sh@1327 -- # shift 00:18:21.826 13:47:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:21.826 13:47:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:21.826 13:47:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:21.826 13:47:24 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:18:21.826 13:47:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:21.826 13:47:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:21.826 13:47:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:21.826 13:47:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:21.826 13:47:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:21.826 13:47:24 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:21.826 13:47:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:21.826 13:47:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:21.826 13:47:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:21.826 13:47:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:21.826 13:47:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:22.090 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:22.090 fio-3.35 00:18:22.090 Starting 1 thread 00:18:22.090 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.620 00:18:24.620 test: (groupid=0, jobs=1): err= 0: pid=1179844: Thu Apr 18 13:47:27 2024 00:18:24.620 read: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(99.4MiB/2005msec) 00:18:24.620 slat (nsec): min=2634, max=25596, avg=2970.33, stdev=677.24 00:18:24.620 clat (usec): min=2250, max=9251, avg=5023.94, stdev=181.22 00:18:24.620 lat (usec): min=2268, max=9254, avg=5026.91, stdev=181.17 00:18:24.620 clat percentiles (usec): 00:18:24.620 | 1.00th=[ 4555], 5.00th=[ 4948], 10.00th=[ 4948], 20.00th=[ 5014], 00:18:24.620 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5014], 00:18:24.620 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5080], 95.00th=[ 5145], 00:18:24.620 | 99.00th=[ 5473], 99.50th=[ 5473], 99.90th=[ 7439], 99.95th=[ 8586], 00:18:24.620 | 99.99th=[ 9241] 00:18:24.620 bw ( KiB/s): min=49720, max=51336, per=99.94%, avg=50714.00, stdev=699.39, samples=4 00:18:24.620 iops : min=12430, max=12834, avg=12678.50, stdev=174.85, samples=4 00:18:24.620 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(99.1MiB/2005msec); 0 zone resets 00:18:24.620 slat (nsec): min=2754, max=19145, avg=3107.25, stdev=672.66 00:18:24.620 clat (usec): min=2273, max=9227, avg=5018.03, stdev=164.58 00:18:24.620 lat (usec): min=2281, max=9230, avg=5021.14, stdev=164.54 00:18:24.620 clat percentiles (usec): 00:18:24.620 | 1.00th=[ 4555], 5.00th=[ 4948], 10.00th=[ 4948], 20.00th=[ 5014], 00:18:24.620 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5014], 00:18:24.620 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5080], 95.00th=[ 5145], 00:18:24.620 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 6783], 99.95th=[ 8029], 00:18:24.620 | 99.99th=[ 9110] 00:18:24.620 bw ( KiB/s): min=50040, max=50968, per=100.00%, avg=50666.00, stdev=425.35, samples=4 00:18:24.620 iops : min=12510, max=12742, avg=12666.50, stdev=106.34, samples=4 00:18:24.620 lat (msec) : 4=0.13%, 10=99.87% 00:18:24.620 cpu : usr=99.30%, sys=0.00%, ctx=16, majf=0, minf=28 00:18:24.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:24.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:18:24.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.620 issued rwts: total=25435,25382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.620 00:18:24.620 Run status group 0 (all jobs): 00:18:24.620 READ: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=99.4MiB (104MB), run=2005-2005msec 00:18:24.620 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=99.1MiB (104MB), run=2005-2005msec 00:18:24.620 13:47:27 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:24.620 13:47:27 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:24.620 13:47:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:24.620 13:47:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:24.620 13:47:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:24.620 13:47:27 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:24.620 13:47:27 -- common/autotest_common.sh@1327 -- # shift 00:18:24.620 13:47:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:24.620 13:47:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:24.620 13:47:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:24.620 13:47:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:24.620 13:47:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:24.620 13:47:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:24.620 13:47:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:24.620 13:47:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:24.620 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:24.620 fio-3.35 00:18:24.620 Starting 1 thread 00:18:24.620 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.156 00:18:27.156 test: (groupid=0, jobs=1): err= 0: pid=1180292: Thu Apr 18 13:47:29 2024 00:18:27.156 read: IOPS=8646, BW=135MiB/s (142MB/s)(268MiB/1983msec) 00:18:27.156 slat (nsec): min=4004, max=71432, avg=6276.88, stdev=2929.98 00:18:27.156 clat (usec): min=421, 
max=19924, avg=5924.26, stdev=4261.18 00:18:27.156 lat (usec): min=425, max=19928, avg=5930.53, stdev=4262.71 00:18:27.156 clat percentiles (usec): 00:18:27.156 | 1.00th=[ 1074], 5.00th=[ 1434], 10.00th=[ 1631], 20.00th=[ 2008], 00:18:27.156 | 30.00th=[ 2474], 40.00th=[ 3130], 50.00th=[ 4293], 60.00th=[ 6587], 00:18:27.156 | 70.00th=[ 8094], 80.00th=[10159], 90.00th=[12256], 95.00th=[13698], 00:18:27.156 | 99.00th=[17171], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 00:18:27.156 | 99.99th=[20055] 00:18:27.156 bw ( KiB/s): min=62240, max=72032, per=48.85%, avg=67584.00, stdev=5011.99, samples=4 00:18:27.156 iops : min= 3890, max= 4502, avg=4224.00, stdev=313.25, samples=4 00:18:27.156 write: IOPS=4731, BW=73.9MiB/s (77.5MB/s)(137MiB/1853msec); 0 zone resets 00:18:27.156 slat (usec): min=44, max=171, avg=63.97, stdev=21.38 00:18:27.156 clat (usec): min=858, max=32895, avg=15718.37, stdev=5117.10 00:18:27.156 lat (usec): min=906, max=32941, avg=15782.33, stdev=5103.78 00:18:27.156 clat percentiles (usec): 00:18:27.156 | 1.00th=[ 5080], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[10290], 00:18:27.156 | 30.00th=[12518], 40.00th=[15401], 50.00th=[16909], 60.00th=[17957], 00:18:27.156 | 70.00th=[19006], 80.00th=[20055], 90.00th=[21627], 95.00th=[23200], 00:18:27.156 | 99.00th=[26084], 99.50th=[26608], 99.90th=[31327], 99.95th=[31589], 00:18:27.156 | 99.99th=[32900] 00:18:27.156 bw ( KiB/s): min=63808, max=76512, per=92.65%, avg=70144.00, stdev=6399.73, samples=4 00:18:27.156 iops : min= 3988, max= 4782, avg=4384.00, stdev=399.98, samples=4 00:18:27.156 lat (usec) : 500=0.02%, 750=0.13%, 1000=0.34% 00:18:27.156 lat (msec) : 2=12.59%, 4=18.85%, 10=26.61%, 20=34.89%, 50=6.58% 00:18:27.156 cpu : usr=97.41%, sys=0.90%, ctx=120, majf=0, minf=42 00:18:27.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:27.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.156 issued rwts: total=17146,8768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.156 00:18:27.156 Run status group 0 (all jobs): 00:18:27.156 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=268MiB (281MB), run=1983-1983msec 00:18:27.156 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=137MiB (144MB), run=1853-1853msec 00:18:27.156 13:47:29 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.156 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.156 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:27.156 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.156 13:47:29 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:18:27.156 13:47:29 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:18:27.156 13:47:29 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:18:27.156 13:47:29 -- host/fio.sh@84 -- # nvmftestfini 00:18:27.156 13:47:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:27.156 13:47:29 -- nvmf/common.sh@117 -- # sync 00:18:27.156 13:47:29 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:27.156 13:47:29 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:27.156 13:47:29 -- nvmf/common.sh@120 -- # set +e 00:18:27.156 13:47:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.156 13:47:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:27.156 rmmod nvme_rdma 00:18:27.156 rmmod 
nvme_fabrics 00:18:27.156 13:47:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.156 13:47:29 -- nvmf/common.sh@124 -- # set -e 00:18:27.156 13:47:29 -- nvmf/common.sh@125 -- # return 0 00:18:27.156 13:47:29 -- nvmf/common.sh@478 -- # '[' -n 1179623 ']' 00:18:27.156 13:47:29 -- nvmf/common.sh@479 -- # killprocess 1179623 00:18:27.156 13:47:29 -- common/autotest_common.sh@936 -- # '[' -z 1179623 ']' 00:18:27.156 13:47:29 -- common/autotest_common.sh@940 -- # kill -0 1179623 00:18:27.156 13:47:29 -- common/autotest_common.sh@941 -- # uname 00:18:27.156 13:47:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.156 13:47:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1179623 00:18:27.156 13:47:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:27.157 13:47:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:27.157 13:47:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1179623' 00:18:27.157 killing process with pid 1179623 00:18:27.157 13:47:29 -- common/autotest_common.sh@955 -- # kill 1179623 00:18:27.157 13:47:29 -- common/autotest_common.sh@960 -- # wait 1179623 00:18:27.416 13:47:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:27.416 13:47:30 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:27.416 00:18:27.416 real 0m9.387s 00:18:27.416 user 0m30.624s 00:18:27.416 sys 0m2.744s 00:18:27.416 13:47:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:27.416 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.416 ************************************ 00:18:27.416 END TEST nvmf_fio_host 00:18:27.416 ************************************ 00:18:27.416 13:47:30 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:18:27.416 13:47:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:27.416 13:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:27.416 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.676 ************************************ 00:18:27.676 START TEST nvmf_failover 00:18:27.676 ************************************ 00:18:27.676 13:47:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:18:27.676 * Looking for test storage... 
00:18:27.676 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:27.676 13:47:30 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.676 13:47:30 -- nvmf/common.sh@7 -- # uname -s 00:18:27.676 13:47:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.676 13:47:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.676 13:47:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.676 13:47:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.676 13:47:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.676 13:47:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.676 13:47:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.676 13:47:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.676 13:47:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.676 13:47:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.676 13:47:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:27.676 13:47:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:18:27.676 13:47:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.676 13:47:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.676 13:47:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.676 13:47:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.676 13:47:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:27.676 13:47:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.676 13:47:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.676 13:47:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.676 13:47:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.676 13:47:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.676 13:47:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.676 13:47:30 -- paths/export.sh@5 -- # export PATH 00:18:27.676 13:47:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.676 13:47:30 -- nvmf/common.sh@47 -- # : 0 00:18:27.676 13:47:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.676 13:47:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.676 13:47:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.676 13:47:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.676 13:47:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.676 13:47:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.676 13:47:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.676 13:47:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.676 13:47:30 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.676 13:47:30 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.676 13:47:30 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:27.676 13:47:30 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.676 13:47:30 -- host/failover.sh@18 -- # nvmftestinit 00:18:27.676 13:47:30 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:27.676 13:47:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.676 13:47:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:27.676 13:47:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:27.676 13:47:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:27.676 13:47:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.676 13:47:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.676 13:47:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.676 13:47:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:27.676 13:47:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:27.676 13:47:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.676 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:18:30.215 13:47:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:30.215 13:47:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.215 13:47:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.215 13:47:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.215 13:47:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.215 13:47:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.215 13:47:32 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.215 13:47:32 -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.215 13:47:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.215 13:47:32 -- nvmf/common.sh@296 -- # e810=() 00:18:30.215 13:47:32 -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.215 13:47:32 -- nvmf/common.sh@297 -- # x722=() 00:18:30.215 13:47:32 -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.215 13:47:32 -- nvmf/common.sh@298 -- # mlx=() 00:18:30.215 13:47:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.215 13:47:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.215 13:47:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.215 13:47:32 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:30.215 13:47:32 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:30.215 13:47:32 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:30.215 13:47:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.215 13:47:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.215 13:47:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:18:30.215 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:18:30.215 13:47:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:30.215 13:47:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.215 13:47:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:18:30.215 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:18:30.215 13:47:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:30.215 13:47:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.215 13:47:32 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:30.215 13:47:32 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.215 13:47:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.215 13:47:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:30.215 13:47:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.215 13:47:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:18:30.215 Found net devices under 0000:81:00.0: mlx_0_0 00:18:30.215 13:47:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.215 13:47:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.215 13:47:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.215 13:47:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:30.215 13:47:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.215 13:47:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:18:30.215 Found net devices under 0000:81:00.1: mlx_0_1 00:18:30.215 13:47:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.215 13:47:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:30.215 13:47:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:30.215 13:47:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:30.215 13:47:32 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:30.215 13:47:32 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:30.215 13:47:32 -- nvmf/common.sh@58 -- # uname 00:18:30.215 13:47:32 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:30.215 13:47:32 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:30.215 13:47:32 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:30.215 13:47:32 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:30.215 13:47:32 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:30.215 13:47:32 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:30.215 13:47:32 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:30.215 13:47:32 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:30.215 13:47:33 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:30.215 13:47:33 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:30.215 13:47:33 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:30.215 13:47:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:30.215 13:47:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:30.215 13:47:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:30.215 13:47:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:30.489 13:47:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:30.489 13:47:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@105 -- # continue 2 00:18:30.489 13:47:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@105 -- # continue 2 00:18:30.489 13:47:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:30.489 13:47:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:30.489 13:47:33 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:30.489 13:47:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:30.489 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:30.489 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:18:30.489 altname enp129s0f0np0 00:18:30.489 inet 192.168.100.8/24 scope global mlx_0_0 00:18:30.489 valid_lft forever preferred_lft forever 00:18:30.489 13:47:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:30.489 13:47:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:30.489 13:47:33 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:30.489 13:47:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:30.489 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:30.489 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:18:30.489 altname enp129s0f1np1 00:18:30.489 inet 192.168.100.9/24 scope global mlx_0_1 00:18:30.489 valid_lft forever preferred_lft forever 00:18:30.489 13:47:33 -- nvmf/common.sh@411 -- # return 0 00:18:30.489 13:47:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:30.489 13:47:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:30.489 13:47:33 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:30.489 13:47:33 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:30.489 13:47:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:30.489 13:47:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:30.489 13:47:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:30.489 13:47:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:30.489 13:47:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:30.489 13:47:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@105 -- # continue 2 00:18:30.489 13:47:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.489 13:47:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.489 13:47:33 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:30.489 13:47:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@105 -- # continue 2 00:18:30.489 13:47:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:30.489 13:47:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:30.489 13:47:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:30.489 13:47:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:30.489 13:47:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:30.489 13:47:33 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:30.489 192.168.100.9' 00:18:30.489 13:47:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:30.489 192.168.100.9' 00:18:30.489 13:47:33 -- nvmf/common.sh@446 -- # head -n 1 00:18:30.489 13:47:33 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:30.489 13:47:33 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:30.489 192.168.100.9' 00:18:30.489 13:47:33 -- nvmf/common.sh@447 -- # tail -n +2 00:18:30.489 13:47:33 -- nvmf/common.sh@447 -- # head -n 1 00:18:30.489 13:47:33 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:30.489 13:47:33 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:30.489 13:47:33 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:30.489 13:47:33 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:30.489 13:47:33 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:30.489 13:47:33 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:30.489 13:47:33 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:30.489 13:47:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:30.489 13:47:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:30.489 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 13:47:33 -- nvmf/common.sh@470 -- # nvmfpid=1182539 00:18:30.489 13:47:33 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:30.489 13:47:33 -- nvmf/common.sh@471 -- # waitforlisten 1182539 00:18:30.489 13:47:33 -- common/autotest_common.sh@817 -- # '[' -z 1182539 ']' 00:18:30.489 13:47:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.489 13:47:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:30.489 13:47:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.489 13:47:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:30.489 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:30.489 [2024-04-18 13:47:33.155643] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:18:30.489 [2024-04-18 13:47:33.155730] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.489 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.489 [2024-04-18 13:47:33.245195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:30.748 [2024-04-18 13:47:33.382214] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.748 [2024-04-18 13:47:33.382288] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.748 [2024-04-18 13:47:33.382309] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.748 [2024-04-18 13:47:33.382325] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.748 [2024-04-18 13:47:33.382340] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.748 [2024-04-18 13:47:33.382454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.748 [2024-04-18 13:47:33.382509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.748 [2024-04-18 13:47:33.382513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.686 13:47:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:31.686 13:47:34 -- common/autotest_common.sh@850 -- # return 0 00:18:31.686 13:47:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:31.686 13:47:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:31.686 13:47:34 -- common/autotest_common.sh@10 -- # set +x 00:18:31.686 13:47:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.686 13:47:34 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:31.944 [2024-04-18 13:47:34.544890] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed07d0/0x1ed4cc0) succeed. 00:18:31.944 [2024-04-18 13:47:34.557111] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed1d20/0x1f16350) succeed. 
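Editor's note: the failover test that starts here exposes one subsystem on three RDMA listeners (ports 4420, 4421, 4422) so that individual paths can be torn down and restored while bdevperf keeps I/O running. A hedged sketch of the listener topology being configured in the trace that follows; the addresses, ports, and NQN are from this run, and the RPC variable plus the for-loop are shorthand introduced for this sketch rather than the literal script text.
    # sketch of the failover target setup traced below (values taken from this run)
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                      # three listeners give bdevperf alternate paths
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
    done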
00:18:31.944 13:47:34 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:32.510 Malloc0 00:18:32.510 13:47:35 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:32.768 13:47:35 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.025 13:47:35 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:33.591 [2024-04-18 13:47:36.137541] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:33.591 13:47:36 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:33.849 [2024-04-18 13:47:36.474463] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:33.849 13:47:36 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:34.108 [2024-04-18 13:47:36.767517] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:18:34.108 13:47:36 -- host/failover.sh@31 -- # bdevperf_pid=1183058 00:18:34.108 13:47:36 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.108 13:47:36 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:34.108 13:47:36 -- host/failover.sh@34 -- # waitforlisten 1183058 /var/tmp/bdevperf.sock 00:18:34.108 13:47:36 -- common/autotest_common.sh@817 -- # '[' -z 1183058 ']' 00:18:34.108 13:47:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.108 13:47:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:34.108 13:47:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
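Editor's note: once bdevperf is listening on its RPC socket, the test attaches NVMe0 over two listeners and then removes and re-adds target listeners while the 15-second verify workload runs, forcing path failover. The sequence below is a sketch assembled from the rpc.py calls that appear in the following trace; the socket path, ports, and NQN are from this run, and the RPC/NQN variables are shorthand added for readability.
    # sketch of the failover exercise traced below (values taken from this run)
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # attach the controller through the bdevperf RPC socket on two paths (-f enables failover mode)
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $NQN
    # while perform_tests runs, drop and restore listeners on the target side
    $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4420
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $NQN
    $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4421
    $RPC nvmf_subsystem_add_listener    $NQN -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4422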
00:18:34.108 13:47:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:34.108 13:47:36 -- common/autotest_common.sh@10 -- # set +x 00:18:34.365 13:47:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.365 13:47:37 -- common/autotest_common.sh@850 -- # return 0 00:18:34.365 13:47:37 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.933 NVMe0n1 00:18:34.933 13:47:37 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:35.501 00:18:35.501 13:47:38 -- host/failover.sh@39 -- # run_test_pid=1183193 00:18:35.501 13:47:38 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.501 13:47:38 -- host/failover.sh@41 -- # sleep 1 00:18:36.447 13:47:39 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:37.039 13:47:39 -- host/failover.sh@45 -- # sleep 3 00:18:40.336 13:47:42 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:40.336 00:18:40.336 13:47:43 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:40.903 13:47:43 -- host/failover.sh@50 -- # sleep 3 00:18:44.191 13:47:46 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:44.191 [2024-04-18 13:47:46.862281] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:44.191 13:47:46 -- host/failover.sh@55 -- # sleep 1 00:18:45.127 13:47:47 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:45.693 13:47:48 -- host/failover.sh@59 -- # wait 1183193 00:18:50.968 0 00:18:50.968 13:47:53 -- host/failover.sh@61 -- # killprocess 1183058 00:18:50.968 13:47:53 -- common/autotest_common.sh@936 -- # '[' -z 1183058 ']' 00:18:50.968 13:47:53 -- common/autotest_common.sh@940 -- # kill -0 1183058 00:18:50.968 13:47:53 -- common/autotest_common.sh@941 -- # uname 00:18:50.968 13:47:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.968 13:47:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1183058 00:18:50.968 13:47:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:50.968 13:47:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:50.968 13:47:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1183058' 00:18:50.968 killing process with pid 1183058 00:18:50.968 13:47:53 -- common/autotest_common.sh@955 -- # kill 1183058 00:18:50.968 13:47:53 -- common/autotest_common.sh@960 -- # wait 1183058 00:18:50.968 13:47:53 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:50.968 [2024-04-18 13:47:36.837674] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:18:50.968 [2024-04-18 13:47:36.837785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183058 ] 00:18:50.968 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.969 [2024-04-18 13:47:36.924046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.969 [2024-04-18 13:47:37.044270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.969 Running I/O for 15 seconds... 00:18:50.969 [2024-04-18 13:47:40.643444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.643974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.643991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:50.969 [2024-04-18 13:47:40.644162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.969 [2024-04-18 13:47:40.644687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.969 [2024-04-18 13:47:40.644705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.644979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.644995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 
00:18:50.970 [2024-04-18 13:47:40.645153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.645964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.645979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.646001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.970 [2024-04-18 13:47:40.646017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.970 [2024-04-18 13:47:40.646044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:50.971 [2024-04-18 13:47:40.646164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17344 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.971 [2024-04-18 13:47:40.646734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.646768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.646801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 
13:47:40.646822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.646839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.646871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.646904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.646942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.646978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.646995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.647010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.647028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.647043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.647060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.647076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.647093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.647108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.647136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.647151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.971 [2024-04-18 13:47:40.647168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x39500 00:18:50.971 [2024-04-18 13:47:40.647184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:16544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16616 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.647853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:40.647868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.649950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.972 [2024-04-18 13:47:40.649984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.972 [2024-04-18 13:47:40.650001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16648 len:8 PRP1 0x0 PRP2 0x0 00:18:50.972 [2024-04-18 13:47:40.650017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:40.650082] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 00:18:50.972 [2024-04-18 13:47:40.650104] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:18:50.972 [2024-04-18 13:47:40.650120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.972 [2024-04-18 13:47:40.653837] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.972 [2024-04-18 13:47:40.672735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:50.972 [2024-04-18 13:47:40.717743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
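The block above is the host-side view of the first failover: after the 4420 listener was removed, the I/O queued on that qpair was aborted with "ABORTED - SQ DELETION", the qpair was disconnected and freed, and bdev_nvme reset the controller against the alternate trid (192.168.100.8:4420 -> 192.168.100.8:4421). That alternate path exists because the test attached the same controller name to both ports earlier in the run; the sketch below restates those calls with the same socket, controller name and ports seen above (a condensed illustration, not the test script).

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # Primary path (4420) plus an alternate trid (4421) under the same bdev
  # controller name, giving bdev_nvme somewhere to fail over to.
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Dropping the active listener on the target side is what triggers the
  # abort/reset/failover sequence logged above.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420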
00:18:50.972 [2024-04-18 13:47:44.531782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:44.531838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.531873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:44.531890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.531910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:44.531926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.531950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.531967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.531995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.532010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.532043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.532076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.532109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.532142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.532175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.972 [2024-04-18 13:47:44.532219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:44.532267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.972 [2024-04-18 13:47:44.532286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x39500 00:18:50.972 [2024-04-18 13:47:44.532302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.973 [2024-04-18 13:47:44.532771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x39500 00:18:50.973 
[2024-04-18 13:47:44.532834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.532975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.532992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.533008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.533025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.533044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.533062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.533077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.533094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.533110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.533126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.533141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.533158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.533173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.973 [2024-04-18 13:47:44.533190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x39500 00:18:50.973 [2024-04-18 13:47:44.533205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 
[2024-04-18 13:47:44.533452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x39500 
00:18:50.974 [2024-04-18 13:47:44.533762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.533864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.533970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.533987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.534002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.534035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.534069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.534102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.974 [2024-04-18 13:47:44.534136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48760 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x39500 00:18:50.974 [2024-04-18 13:47:44.534438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.974 [2024-04-18 13:47:44.534455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.534471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.534503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.534535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.534567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.534603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.534635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.534963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.534994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.535010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:50.975 [2024-04-18 13:47:44.535041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.535074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.535105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.535137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.975 [2024-04-18 13:47:44.535168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 
p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.975 [2024-04-18 13:47:44.535585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x39500 00:18:50.975 [2024-04-18 13:47:44.535600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:44.535632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 
13:47:44.535650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:44.535665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:44.535698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.535971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.535988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.536004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.536020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.536036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.536053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.536069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.536086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:44.536101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.538046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.976 [2024-04-18 13:47:44.538080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.976 [2024-04-18 13:47:44.538097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49432 len:8 PRP1 0x0 PRP2 0x0 00:18:50.976 [2024-04-18 13:47:44.538119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:44.538178] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller. 00:18:50.976 [2024-04-18 13:47:44.538200] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:18:50.976 [2024-04-18 13:47:44.538217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.976 [2024-04-18 13:47:44.541866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.976 [2024-04-18 13:47:44.560595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:50.976 [2024-04-18 13:47:44.610290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
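Every completion above carries status (00/08): generic status code type 0x0 with status code 0x08, "Command Aborted due to SQ Deletion". The queued READ/WRITE commands were flushed when the qpair was disconnected for the failover from 192.168.100.8:4421 to 192.168.100.8:4422, after which the controller reset completed successfully. As a rough illustration only (not part of this test run), a consumer of the SPDK NVMe driver could recognize that status in its I/O completion callback and requeue the request for resubmission after the reset. The sketch below assumes only the public spdk/nvme.h API; my_io_ctx and my_requeue_io are hypothetical application-side names.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/log.h"

/* Hypothetical per-I/O context kept by the application. */
struct my_io_ctx {
	uint64_t lba;
	uint32_t lba_count;
	void *payload;
};

/* Hypothetical application helper that puts the I/O back on a retry queue. */
extern void my_requeue_io(struct my_io_ctx *ctx);

/* Matches the spdk_nvme_cmd_cb signature used by spdk_nvme_ns_cmd_read()/write(). */
static void
my_io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct my_io_ctx *ctx = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* normal completion */
	}

	/*
	 * Status (00/08) as printed in the log: generic status code type (0x0),
	 * "Command Aborted due to SQ Deletion" (0x08). The command never reached
	 * the media; its submission queue was torn down while the qpair was being
	 * disconnected, so it is safe to resubmit once the controller reset /
	 * failover has finished.
	 */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		my_requeue_io(ctx);
		return;
	}

	/* Anything else is treated as a hard failure in this sketch. */
	SPDK_ERRLOG("I/O at LBA %" PRIu64 " failed: sct=0x%x sc=0x%x\n",
		    ctx->lba, cpl->status.sct, cpl->status.sc);
}

The status macros come from spdk/nvme_spec.h, which spdk/nvme.h already pulls in; treating SQ-deletion aborts as retryable rather than fatal is what lets a run like this continue past the "Resetting controller successful" message instead of surfacing I/O errors.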
00:18:50.976 [2024-04-18 13:47:49.210446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.210496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.210547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.210584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.210618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.210653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.976 [2024-04-18 13:47:49.210945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.210965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.210992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.211009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.211023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.211040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.211055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.211072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.211089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.211106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.211121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.976 [2024-04-18 13:47:49.211138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x39500 00:18:50.976 [2024-04-18 13:47:49.211153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 
[2024-04-18 13:47:49.211491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x39500 00:18:50.977 [2024-04-18 13:47:49.211765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:2fe0 p:0 m:0 dnr:0 00:18:50.977 [2024-04-18 13:47:49.211781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.977 [2024-04-18 13:47:49.211796] nvme_qpair.c: 474:spdk_nvme_print_completion: 
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notices: every outstanding READ and WRITE on qid:1 is reported as ABORTED - SQ DELETION (00/08) while the submission queue is deleted for the path failover ...]
00:18:50.980 [2024-04-18 13:47:49.216668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:50.980 [2024-04-18 13:47:49.216701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:50.980 [2024-04-18 13:47:49.216717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35752 len:8 PRP1 0x0 PRP2 0x0
00:18:50.980 [2024-04-18 13:47:49.216733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:50.980 [2024-04-18 13:47:49.216791] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller.
00:18:50.980 [2024-04-18 13:47:49.216812] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:18:50.980 [2024-04-18 13:47:49.216829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:50.980 [2024-04-18 13:47:49.220518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:50.980 [2024-04-18 13:47:49.238932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:50.980 [2024-04-18 13:47:49.282457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:50.980 00:18:50.980 Latency(us) 00:18:50.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.980 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:50.980 Verification LBA range: start 0x0 length 0x4000 00:18:50.980 NVMe0n1 : 15.01 10065.71 39.32 209.84 0.00 12427.81 628.05 1025274.31 00:18:50.980 =================================================================================================================== 00:18:50.980 Total : 10065.71 39.32 209.84 0.00 12427.81 628.05 1025274.31 00:18:50.980 Received shutdown signal, test time was about 15.000000 seconds 00:18:50.980 00:18:50.980 Latency(us) 00:18:50.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.980 =================================================================================================================== 00:18:50.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.980 13:47:53 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:50.980 13:47:53 -- host/failover.sh@65 -- # count=3 00:18:50.980 13:47:53 -- host/failover.sh@67 -- # (( count != 3 )) 00:18:50.980 13:47:53 -- host/failover.sh@73 -- # bdevperf_pid=1184911 00:18:50.980 13:47:53 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:50.980 13:47:53 -- host/failover.sh@75 -- # waitforlisten 1184911 /var/tmp/bdevperf.sock 00:18:50.980 13:47:53 -- common/autotest_common.sh@817 -- # '[' -z 1184911 ']' 00:18:50.980 13:47:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.980 13:47:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:50.980 13:47:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
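Once this second bdevperf instance (pid 1184911, started with -z so the workload is only kicked off later via bdevperf.py perform_tests) is listening on /var/tmp/bdevperf.sock, failover.sh drives it through the RPC sequence traced below. Condensed into a minimal sketch: the address, ports, NQN and RPC names are taken from the trace, while the variable names and the loop are illustrative only.

  # Sketch of the multipath failover setup driven by failover.sh (not the script itself).
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  BPERF_SOCK=/var/tmp/bdevperf.sock

  # Target side: expose the subsystem on two extra RDMA portals besides 4420.
  $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4422

  # bdevperf side: register all three portals as paths of one NVMe0 controller.
  for port in 4420 4421 4422; do
      $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 \
          -t rdma -a 192.168.100.8 -s $port -f ipv4 -n $NQN
  done

  # Pull the active path so bdev_nvme has to fail over, confirm the controller is
  # still registered, then run the verify workload; the trace repeats the same
  # check-and-detach step for ports 4422 and 4421 afterwards.
  $RPC -s $BPERF_SOCK bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN
  $RPC -s $BPERF_SOCK bdev_nvme_get_controllers | grep -q NVMe0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests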
00:18:50.980 13:47:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.980 13:47:53 -- common/autotest_common.sh@10 -- # set +x 00:18:51.239 13:47:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.239 13:47:53 -- common/autotest_common.sh@850 -- # return 0 00:18:51.239 13:47:53 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:51.497 [2024-04-18 13:47:54.291810] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:51.755 13:47:54 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:52.014 [2024-04-18 13:47:54.641060] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:18:52.014 13:47:54 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:52.272 NVMe0n1 00:18:52.529 13:47:55 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:52.786 00:18:52.786 13:47:55 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:53.044 00:18:53.303 13:47:55 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:53.303 13:47:55 -- host/failover.sh@82 -- # grep -q NVMe0 00:18:53.561 13:47:56 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:53.820 13:47:56 -- host/failover.sh@87 -- # sleep 3 00:18:57.104 13:47:59 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.104 13:47:59 -- host/failover.sh@88 -- # grep -q NVMe0 00:18:57.104 13:47:59 -- host/failover.sh@90 -- # run_test_pid=1185699 00:18:57.104 13:47:59 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.104 13:47:59 -- host/failover.sh@92 -- # wait 1185699 00:18:58.479 0 00:18:58.479 13:48:01 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:58.479 [2024-04-18 13:47:53.648268] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:18:58.479 [2024-04-18 13:47:53.648374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184911 ] 00:18:58.479 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.479 [2024-04-18 13:47:53.727772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.479 [2024-04-18 13:47:53.844700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.479 [2024-04-18 13:47:56.505204] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:18:58.479 [2024-04-18 13:47:56.505766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:58.479 [2024-04-18 13:47:56.505831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.479 [2024-04-18 13:47:56.533453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:58.479 [2024-04-18 13:47:56.559074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:58.479 Running I/O for 1 seconds... 00:18:58.479 00:18:58.479 Latency(us) 00:18:58.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.479 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:58.479 Verification LBA range: start 0x0 length 0x4000 00:18:58.480 NVMe0n1 : 1.01 13993.23 54.66 0.00 0.00 9089.28 3519.53 18058.81 00:18:58.480 =================================================================================================================== 00:18:58.480 Total : 13993.23 54.66 0.00 0.00 9089.28 3519.53 18058.81 00:18:58.480 13:48:01 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:58.480 13:48:01 -- host/failover.sh@95 -- # grep -q NVMe0 00:18:58.737 13:48:01 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:58.994 13:48:01 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:58.994 13:48:01 -- host/failover.sh@99 -- # grep -q NVMe0 00:18:59.559 13:48:02 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:59.817 13:48:02 -- host/failover.sh@101 -- # sleep 3 00:19:03.109 13:48:05 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.109 13:48:05 -- host/failover.sh@103 -- # grep -q NVMe0 00:19:03.109 13:48:05 -- host/failover.sh@108 -- # killprocess 1184911 00:19:03.109 13:48:05 -- common/autotest_common.sh@936 -- # '[' -z 1184911 ']' 00:19:03.109 13:48:05 -- common/autotest_common.sh@940 -- # kill -0 1184911 00:19:03.109 13:48:05 -- common/autotest_common.sh@941 -- # uname 00:19:03.109 13:48:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.109 13:48:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 1184911 00:19:03.109 13:48:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:03.109 13:48:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:03.109 13:48:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1184911' 00:19:03.109 killing process with pid 1184911 00:19:03.109 13:48:05 -- common/autotest_common.sh@955 -- # kill 1184911 00:19:03.109 13:48:05 -- common/autotest_common.sh@960 -- # wait 1184911 00:19:03.368 13:48:06 -- host/failover.sh@110 -- # sync 00:19:03.368 13:48:06 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.626 13:48:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:03.626 13:48:06 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:03.626 13:48:06 -- host/failover.sh@116 -- # nvmftestfini 00:19:03.626 13:48:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:03.626 13:48:06 -- nvmf/common.sh@117 -- # sync 00:19:03.626 13:48:06 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:03.626 13:48:06 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:03.626 13:48:06 -- nvmf/common.sh@120 -- # set +e 00:19:03.626 13:48:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.626 13:48:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:03.626 rmmod nvme_rdma 00:19:03.626 rmmod nvme_fabrics 00:19:03.626 13:48:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.626 13:48:06 -- nvmf/common.sh@124 -- # set -e 00:19:03.626 13:48:06 -- nvmf/common.sh@125 -- # return 0 00:19:03.626 13:48:06 -- nvmf/common.sh@478 -- # '[' -n 1182539 ']' 00:19:03.626 13:48:06 -- nvmf/common.sh@479 -- # killprocess 1182539 00:19:03.626 13:48:06 -- common/autotest_common.sh@936 -- # '[' -z 1182539 ']' 00:19:03.626 13:48:06 -- common/autotest_common.sh@940 -- # kill -0 1182539 00:19:03.626 13:48:06 -- common/autotest_common.sh@941 -- # uname 00:19:03.626 13:48:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.626 13:48:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1182539 00:19:03.885 13:48:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:03.885 13:48:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:03.885 13:48:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1182539' 00:19:03.885 killing process with pid 1182539 00:19:03.885 13:48:06 -- common/autotest_common.sh@955 -- # kill 1182539 00:19:03.885 13:48:06 -- common/autotest_common.sh@960 -- # wait 1182539 00:19:04.145 13:48:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:04.145 13:48:06 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:04.145 00:19:04.145 real 0m36.529s 00:19:04.145 user 2m18.313s 00:19:04.145 sys 0m4.748s 00:19:04.145 13:48:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:04.145 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:04.145 ************************************ 00:19:04.145 END TEST nvmf_failover 00:19:04.145 ************************************ 00:19:04.145 13:48:06 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:04.145 13:48:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:04.145 13:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:04.145 13:48:06 -- common/autotest_common.sh@10 -- # set +x 
00:19:04.405 ************************************ 00:19:04.405 START TEST nvmf_discovery 00:19:04.405 ************************************ 00:19:04.405 13:48:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:04.405 * Looking for test storage... 00:19:04.405 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:04.405 13:48:07 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.405 13:48:07 -- nvmf/common.sh@7 -- # uname -s 00:19:04.405 13:48:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.405 13:48:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.405 13:48:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.405 13:48:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.405 13:48:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.405 13:48:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.405 13:48:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.405 13:48:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.405 13:48:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.405 13:48:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.405 13:48:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:04.405 13:48:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:19:04.405 13:48:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.405 13:48:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.405 13:48:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.405 13:48:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.405 13:48:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:04.405 13:48:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.405 13:48:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.405 13:48:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.405 13:48:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.405 13:48:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.405 13:48:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.405 13:48:07 -- paths/export.sh@5 -- # export PATH 00:19:04.405 13:48:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.405 13:48:07 -- nvmf/common.sh@47 -- # : 0 00:19:04.405 13:48:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.405 13:48:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.405 13:48:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.405 13:48:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.405 13:48:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.405 13:48:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.405 13:48:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.405 13:48:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.405 13:48:07 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:19:04.405 13:48:07 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:04.405 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:04.405 13:48:07 -- host/discovery.sh@13 -- # exit 0 00:19:04.405 00:19:04.405 real 0m0.080s 00:19:04.405 user 0m0.034s 00:19:04.405 sys 0m0.053s 00:19:04.405 13:48:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:04.405 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.405 ************************************ 00:19:04.405 END TEST nvmf_discovery 00:19:04.405 ************************************ 00:19:04.405 13:48:07 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:19:04.405 13:48:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:04.405 13:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:04.405 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.665 ************************************ 00:19:04.665 START TEST nvmf_discovery_remove_ifc 00:19:04.665 ************************************ 00:19:04.665 13:48:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:19:04.665 * Looking for test storage... 
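The discovery host tests (this one and discovery_remove_ifc, which follows) bail out early on this transport: after sourcing nvmf/common.sh they compare the transport against rdma and exit before doing any work, which is why only the "Skipping tests on RDMA..." message and a near-zero runtime appear. The guard amounts to something like the following sketch; the transport variable name is an assumption, since the trace only shows the already-expanded comparison:

  # Guard at the top of the discovery host tests (sketch; variable name assumed).
  if [ "$TEST_TRANSPORT" == "rdma" ]; then
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi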
00:19:04.665 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:04.665 13:48:07 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.665 13:48:07 -- nvmf/common.sh@7 -- # uname -s 00:19:04.665 13:48:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.665 13:48:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.665 13:48:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.665 13:48:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.665 13:48:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.665 13:48:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.665 13:48:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.665 13:48:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.665 13:48:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.665 13:48:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.665 13:48:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:04.665 13:48:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:19:04.665 13:48:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.665 13:48:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.665 13:48:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.665 13:48:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.665 13:48:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:04.665 13:48:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.665 13:48:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.665 13:48:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.665 13:48:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.665 13:48:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.665 13:48:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.665 13:48:07 -- paths/export.sh@5 -- # export PATH 00:19:04.665 13:48:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.665 13:48:07 -- nvmf/common.sh@47 -- # : 0 00:19:04.665 13:48:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.665 13:48:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.665 13:48:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.665 13:48:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.665 13:48:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.665 13:48:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.665 13:48:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.665 13:48:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.665 13:48:07 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:19:04.665 13:48:07 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:04.665 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:04.665 13:48:07 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:19:04.665 00:19:04.665 real 0m0.075s 00:19:04.665 user 0m0.040s 00:19:04.665 sys 0m0.041s 00:19:04.665 13:48:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:04.665 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.665 ************************************ 00:19:04.665 END TEST nvmf_discovery_remove_ifc 00:19:04.665 ************************************ 00:19:04.665 13:48:07 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:19:04.665 13:48:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:04.665 13:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:04.665 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.665 ************************************ 00:19:04.665 START TEST nvmf_identify_kernel_target 00:19:04.665 ************************************ 00:19:04.665 13:48:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:19:04.924 * Looking for test storage... 
00:19:04.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:04.924 13:48:07 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.924 13:48:07 -- nvmf/common.sh@7 -- # uname -s 00:19:04.924 13:48:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.924 13:48:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.924 13:48:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.924 13:48:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.924 13:48:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.924 13:48:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.924 13:48:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.924 13:48:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.924 13:48:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.924 13:48:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.924 13:48:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:04.924 13:48:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:19:04.924 13:48:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.924 13:48:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.924 13:48:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.924 13:48:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.924 13:48:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:04.924 13:48:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.924 13:48:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.924 13:48:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.925 13:48:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.925 13:48:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.925 13:48:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.925 13:48:07 -- paths/export.sh@5 -- # export PATH 00:19:04.925 13:48:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.925 13:48:07 -- nvmf/common.sh@47 -- # : 0 00:19:04.925 13:48:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.925 13:48:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.925 13:48:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.925 13:48:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.925 13:48:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.925 13:48:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.925 13:48:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.925 13:48:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.925 13:48:07 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:04.925 13:48:07 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:04.925 13:48:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.925 13:48:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:04.925 13:48:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:04.925 13:48:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:04.925 13:48:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.925 13:48:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.925 13:48:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.925 13:48:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:04.925 13:48:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:04.925 13:48:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.925 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:19:07.457 13:48:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:07.457 13:48:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.457 13:48:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.457 13:48:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.457 13:48:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.457 13:48:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.457 13:48:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.457 13:48:10 -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.457 13:48:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.457 13:48:10 -- nvmf/common.sh@296 -- # e810=() 00:19:07.457 13:48:10 -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.457 13:48:10 -- nvmf/common.sh@297 -- # 
x722=() 00:19:07.457 13:48:10 -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.457 13:48:10 -- nvmf/common.sh@298 -- # mlx=() 00:19:07.457 13:48:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.457 13:48:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.457 13:48:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.457 13:48:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.457 13:48:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.457 13:48:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.457 13:48:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.457 13:48:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.458 13:48:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.458 13:48:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.458 13:48:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.458 13:48:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.458 13:48:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.458 13:48:10 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:07.458 13:48:10 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:07.458 13:48:10 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:07.458 13:48:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.458 13:48:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.458 13:48:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:19:07.458 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:19:07.458 13:48:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.458 13:48:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.458 13:48:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:19:07.458 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:19:07.458 13:48:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.458 13:48:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.458 13:48:10 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.458 13:48:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.458 13:48:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:07.458 13:48:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.458 13:48:10 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:19:07.458 Found net devices under 0000:81:00.0: mlx_0_0 00:19:07.458 13:48:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.458 13:48:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.458 13:48:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.458 13:48:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:07.458 13:48:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.458 13:48:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:19:07.458 Found net devices under 0000:81:00.1: mlx_0_1 00:19:07.458 13:48:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.458 13:48:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:07.458 13:48:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:07.458 13:48:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:07.458 13:48:10 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:07.458 13:48:10 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:07.458 13:48:10 -- nvmf/common.sh@58 -- # uname 00:19:07.458 13:48:10 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:07.458 13:48:10 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:07.458 13:48:10 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:07.458 13:48:10 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:07.458 13:48:10 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:07.458 13:48:10 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:07.458 13:48:10 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:07.458 13:48:10 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:07.458 13:48:10 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:07.458 13:48:10 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:07.458 13:48:10 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:07.458 13:48:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.458 13:48:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:07.723 13:48:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:07.723 13:48:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.724 13:48:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:07.724 13:48:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:07.724 13:48:10 -- nvmf/common.sh@105 -- # continue 2 00:19:07.724 13:48:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@105 -- # continue 2 00:19:07.724 13:48:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:07.724 13:48:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:07.724 13:48:10 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.724 13:48:10 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:07.724 13:48:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:07.724 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.724 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:19:07.724 altname enp129s0f0np0 00:19:07.724 inet 192.168.100.8/24 scope global mlx_0_0 00:19:07.724 valid_lft forever preferred_lft forever 00:19:07.724 13:48:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:07.724 13:48:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.724 13:48:10 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:07.724 13:48:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:07.724 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.724 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:19:07.724 altname enp129s0f1np1 00:19:07.724 inet 192.168.100.9/24 scope global mlx_0_1 00:19:07.724 valid_lft forever preferred_lft forever 00:19:07.724 13:48:10 -- nvmf/common.sh@411 -- # return 0 00:19:07.724 13:48:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:07.724 13:48:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:07.724 13:48:10 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:07.724 13:48:10 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:07.724 13:48:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.724 13:48:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:07.724 13:48:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:07.724 13:48:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.724 13:48:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:07.724 13:48:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:07.724 13:48:10 -- nvmf/common.sh@105 -- # continue 2 00:19:07.724 13:48:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.724 13:48:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@105 -- # continue 2 00:19:07.724 13:48:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:07.724 13:48:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 
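Note: the trace above shows how allocate_nic_ips resolves each RDMA interface's IPv4 address: "ip -o -4 addr show <if>" is piped through awk to take the CIDR field and through cut to drop the prefix length. A minimal standalone sketch of the same pipeline follows; the wrapper function name and the sample interfaces are illustrative, not the exact helper from nvmf/common.sh.

    #!/usr/bin/env bash
    # Print the IPv4 address(es) assigned to a network interface, one per line.
    # Same ip | awk | cut pipeline as the xtrace above; output is empty if the
    # interface carries no IPv4 address.
    get_if_ipv4() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_if_ipv4 mlx_0_0   # 192.168.100.8 on this test bed
    get_if_ipv4 mlx_0_1   # 192.168.100.9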
00:19:07.724 13:48:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.724 13:48:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:07.724 13:48:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:07.724 13:48:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:07.724 13:48:10 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:07.724 192.168.100.9' 00:19:07.724 13:48:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:07.724 192.168.100.9' 00:19:07.724 13:48:10 -- nvmf/common.sh@446 -- # head -n 1 00:19:07.724 13:48:10 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:07.724 13:48:10 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:07.724 192.168.100.9' 00:19:07.724 13:48:10 -- nvmf/common.sh@447 -- # tail -n +2 00:19:07.724 13:48:10 -- nvmf/common.sh@447 -- # head -n 1 00:19:07.724 13:48:10 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:07.724 13:48:10 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:07.724 13:48:10 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:07.724 13:48:10 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:07.724 13:48:10 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:07.724 13:48:10 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:07.724 13:48:10 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:07.724 13:48:10 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:07.724 13:48:10 -- nvmf/common.sh@717 -- # local ip 00:19:07.724 13:48:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.724 13:48:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.724 13:48:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.724 13:48:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.724 13:48:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:07.724 13:48:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:07.724 13:48:10 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:19:07.724 13:48:10 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:19:07.724 13:48:10 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:19:07.724 13:48:10 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:07.724 13:48:10 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:07.724 13:48:10 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:07.724 13:48:10 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:07.724 13:48:10 -- nvmf/common.sh@628 -- # local block nvme 00:19:07.724 13:48:10 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:07.724 13:48:10 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:07.724 13:48:10 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:09.102 Waiting for block devices as requested 00:19:09.102 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:19:09.102 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:09.102 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:09.361 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:09.361 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:09.361 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:09.361 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:09.619 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:09.619 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:09.619 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:09.620 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:09.878 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:09.878 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:09.878 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:09.878 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:10.136 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:10.136 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:10.136 13:48:12 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:10.136 13:48:12 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:10.136 13:48:12 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:10.136 13:48:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:10.136 13:48:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:10.136 13:48:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:10.136 13:48:12 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:10.136 13:48:12 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:10.136 13:48:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:10.404 No valid GPT data, bailing 00:19:10.404 13:48:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:10.404 13:48:12 -- scripts/common.sh@391 -- # pt= 00:19:10.404 13:48:12 -- scripts/common.sh@392 -- # return 1 00:19:10.404 13:48:12 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:10.404 13:48:12 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:10.404 13:48:12 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:10.404 13:48:12 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:10.404 13:48:12 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:10.404 13:48:12 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:10.404 13:48:12 -- nvmf/common.sh@656 -- # echo 1 00:19:10.404 13:48:12 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:10.404 13:48:12 -- nvmf/common.sh@658 -- # echo 1 00:19:10.404 13:48:12 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:19:10.404 13:48:12 -- nvmf/common.sh@661 -- # echo rdma 00:19:10.404 13:48:12 -- nvmf/common.sh@662 -- # echo 4420 00:19:10.404 13:48:12 -- nvmf/common.sh@663 -- # echo ipv4 00:19:10.404 13:48:12 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:10.404 13:48:12 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -t rdma -s 4420 00:19:10.404 00:19:10.404 Discovery Log Number of Records 2, Generation counter 2 00:19:10.404 =====Discovery Log Entry 0====== 00:19:10.404 trtype: rdma 00:19:10.404 adrfam: ipv4 00:19:10.404 subtype: current discovery subsystem 00:19:10.404 treq: not specified, sq flow control disable supported 00:19:10.404 portid: 1 00:19:10.404 trsvcid: 4420 00:19:10.404 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:10.404 traddr: 192.168.100.8 00:19:10.404 eflags: none 00:19:10.404 rdma_prtype: not specified 00:19:10.404 rdma_qptype: connected 00:19:10.404 rdma_cms: rdma-cm 00:19:10.404 rdma_pkey: 0x0000 00:19:10.404 =====Discovery Log Entry 1====== 00:19:10.404 trtype: rdma 00:19:10.404 adrfam: ipv4 00:19:10.404 subtype: nvme subsystem 00:19:10.404 treq: not specified, sq flow control disable supported 00:19:10.404 portid: 1 00:19:10.404 trsvcid: 4420 00:19:10.404 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:10.404 traddr: 192.168.100.8 00:19:10.404 eflags: none 00:19:10.404 rdma_prtype: not specified 00:19:10.404 rdma_qptype: connected 00:19:10.404 rdma_cms: rdma-cm 00:19:10.404 rdma_pkey: 0x0000 00:19:10.404 13:48:13 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:19:10.404 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:10.404 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.404 ===================================================== 00:19:10.404 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:10.404 ===================================================== 00:19:10.404 Controller Capabilities/Features 00:19:10.404 ================================ 00:19:10.404 Vendor ID: 0000 00:19:10.404 Subsystem Vendor ID: 0000 00:19:10.404 Serial Number: 9a1bb68ef3358cec27ec 00:19:10.404 Model Number: Linux 00:19:10.404 Firmware Version: 6.7.0-68 00:19:10.404 Recommended Arb Burst: 0 00:19:10.404 IEEE OUI Identifier: 00 00 00 00:19:10.404 Multi-path I/O 00:19:10.404 May have multiple subsystem ports: No 00:19:10.404 May have multiple controllers: No 00:19:10.404 Associated with SR-IOV VF: No 00:19:10.404 Max Data Transfer Size: Unlimited 00:19:10.404 Max Number of Namespaces: 0 00:19:10.404 Max Number of I/O Queues: 1024 00:19:10.404 NVMe Specification Version (VS): 1.3 00:19:10.404 NVMe Specification Version (Identify): 1.3 00:19:10.404 Maximum Queue Entries: 128 00:19:10.404 Contiguous Queues Required: No 00:19:10.404 Arbitration Mechanisms Supported 00:19:10.404 Weighted Round Robin: Not Supported 00:19:10.404 Vendor Specific: Not Supported 00:19:10.404 Reset Timeout: 7500 ms 00:19:10.404 Doorbell Stride: 4 bytes 00:19:10.404 NVM Subsystem Reset: Not Supported 00:19:10.404 Command Sets Supported 00:19:10.404 NVM Command Set: Supported 00:19:10.404 Boot Partition: Not Supported 00:19:10.404 Memory Page Size Minimum: 4096 bytes 00:19:10.404 Memory Page Size Maximum: 4096 bytes 00:19:10.404 Persistent Memory Region: Not Supported 00:19:10.404 Optional Asynchronous Events Supported 00:19:10.404 Namespace Attribute Notices: Not Supported 00:19:10.404 Firmware Activation Notices: Not Supported 00:19:10.404 ANA Change Notices: Not Supported 00:19:10.404 PLE Aggregate Log Change Notices: Not Supported 00:19:10.404 LBA Status Info Alert Notices: Not Supported 
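Note: the discovery records and the identify data being dumped here are served by the kernel nvmet target that configure_kernel_target set up through configfs a few lines above. A condensed sketch of those steps follows; xtrace does not display redirection targets, so the attribute file names below are the standard kernel nvmet configfs ones rather than paths taken verbatim from the trace, and the hostnqn/hostid flags of the discover call are omitted for brevity.

    #!/usr/bin/env bash
    # Export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn over RDMA, as traced above.
    set -e
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    modprobe nvmet_rdma

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    echo 1 > "$subsys/attr_allow_any_host"             # test setup: accept any host
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    echo 192.168.100.8 > "$port/addr_traddr"
    echo rdma > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"                 # publish the subsystem on the port
    nvme discover -t rdma -a 192.168.100.8 -s 4420      # yields the two records above

The clean_kernel_target trace further down reverses these steps: the port symlink is removed, the namespace, port and subsystem directories are rmdir'ed, and nvmet_rdma/nvmet are unloaded.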
00:19:10.404 EGE Aggregate Log Change Notices: Not Supported 00:19:10.404 Normal NVM Subsystem Shutdown event: Not Supported 00:19:10.404 Zone Descriptor Change Notices: Not Supported 00:19:10.404 Discovery Log Change Notices: Supported 00:19:10.404 Controller Attributes 00:19:10.404 128-bit Host Identifier: Not Supported 00:19:10.404 Non-Operational Permissive Mode: Not Supported 00:19:10.404 NVM Sets: Not Supported 00:19:10.404 Read Recovery Levels: Not Supported 00:19:10.404 Endurance Groups: Not Supported 00:19:10.404 Predictable Latency Mode: Not Supported 00:19:10.404 Traffic Based Keep ALive: Not Supported 00:19:10.404 Namespace Granularity: Not Supported 00:19:10.404 SQ Associations: Not Supported 00:19:10.404 UUID List: Not Supported 00:19:10.404 Multi-Domain Subsystem: Not Supported 00:19:10.404 Fixed Capacity Management: Not Supported 00:19:10.404 Variable Capacity Management: Not Supported 00:19:10.404 Delete Endurance Group: Not Supported 00:19:10.404 Delete NVM Set: Not Supported 00:19:10.404 Extended LBA Formats Supported: Not Supported 00:19:10.404 Flexible Data Placement Supported: Not Supported 00:19:10.404 00:19:10.404 Controller Memory Buffer Support 00:19:10.404 ================================ 00:19:10.404 Supported: No 00:19:10.404 00:19:10.404 Persistent Memory Region Support 00:19:10.404 ================================ 00:19:10.404 Supported: No 00:19:10.404 00:19:10.404 Admin Command Set Attributes 00:19:10.404 ============================ 00:19:10.404 Security Send/Receive: Not Supported 00:19:10.404 Format NVM: Not Supported 00:19:10.404 Firmware Activate/Download: Not Supported 00:19:10.404 Namespace Management: Not Supported 00:19:10.404 Device Self-Test: Not Supported 00:19:10.404 Directives: Not Supported 00:19:10.404 NVMe-MI: Not Supported 00:19:10.405 Virtualization Management: Not Supported 00:19:10.405 Doorbell Buffer Config: Not Supported 00:19:10.405 Get LBA Status Capability: Not Supported 00:19:10.405 Command & Feature Lockdown Capability: Not Supported 00:19:10.405 Abort Command Limit: 1 00:19:10.405 Async Event Request Limit: 1 00:19:10.405 Number of Firmware Slots: N/A 00:19:10.405 Firmware Slot 1 Read-Only: N/A 00:19:10.405 Firmware Activation Without Reset: N/A 00:19:10.405 Multiple Update Detection Support: N/A 00:19:10.405 Firmware Update Granularity: No Information Provided 00:19:10.405 Per-Namespace SMART Log: No 00:19:10.405 Asymmetric Namespace Access Log Page: Not Supported 00:19:10.405 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:10.405 Command Effects Log Page: Not Supported 00:19:10.405 Get Log Page Extended Data: Supported 00:19:10.405 Telemetry Log Pages: Not Supported 00:19:10.405 Persistent Event Log Pages: Not Supported 00:19:10.405 Supported Log Pages Log Page: May Support 00:19:10.405 Commands Supported & Effects Log Page: Not Supported 00:19:10.405 Feature Identifiers & Effects Log Page:May Support 00:19:10.405 NVMe-MI Commands & Effects Log Page: May Support 00:19:10.405 Data Area 4 for Telemetry Log: Not Supported 00:19:10.405 Error Log Page Entries Supported: 1 00:19:10.405 Keep Alive: Not Supported 00:19:10.405 00:19:10.405 NVM Command Set Attributes 00:19:10.405 ========================== 00:19:10.405 Submission Queue Entry Size 00:19:10.405 Max: 1 00:19:10.405 Min: 1 00:19:10.405 Completion Queue Entry Size 00:19:10.405 Max: 1 00:19:10.405 Min: 1 00:19:10.405 Number of Namespaces: 0 00:19:10.405 Compare Command: Not Supported 00:19:10.405 Write Uncorrectable Command: Not Supported 00:19:10.405 Dataset 
Management Command: Not Supported 00:19:10.405 Write Zeroes Command: Not Supported 00:19:10.405 Set Features Save Field: Not Supported 00:19:10.405 Reservations: Not Supported 00:19:10.405 Timestamp: Not Supported 00:19:10.405 Copy: Not Supported 00:19:10.405 Volatile Write Cache: Not Present 00:19:10.405 Atomic Write Unit (Normal): 1 00:19:10.405 Atomic Write Unit (PFail): 1 00:19:10.405 Atomic Compare & Write Unit: 1 00:19:10.405 Fused Compare & Write: Not Supported 00:19:10.405 Scatter-Gather List 00:19:10.405 SGL Command Set: Supported 00:19:10.405 SGL Keyed: Supported 00:19:10.405 SGL Bit Bucket Descriptor: Not Supported 00:19:10.405 SGL Metadata Pointer: Not Supported 00:19:10.405 Oversized SGL: Not Supported 00:19:10.405 SGL Metadata Address: Not Supported 00:19:10.405 SGL Offset: Supported 00:19:10.405 Transport SGL Data Block: Not Supported 00:19:10.405 Replay Protected Memory Block: Not Supported 00:19:10.405 00:19:10.405 Firmware Slot Information 00:19:10.405 ========================= 00:19:10.405 Active slot: 0 00:19:10.405 00:19:10.405 00:19:10.405 Error Log 00:19:10.405 ========= 00:19:10.405 00:19:10.405 Active Namespaces 00:19:10.405 ================= 00:19:10.405 Discovery Log Page 00:19:10.405 ================== 00:19:10.405 Generation Counter: 2 00:19:10.405 Number of Records: 2 00:19:10.405 Record Format: 0 00:19:10.405 00:19:10.405 Discovery Log Entry 0 00:19:10.405 ---------------------- 00:19:10.405 Transport Type: 1 (RDMA) 00:19:10.405 Address Family: 1 (IPv4) 00:19:10.405 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:10.405 Entry Flags: 00:19:10.405 Duplicate Returned Information: 0 00:19:10.405 Explicit Persistent Connection Support for Discovery: 0 00:19:10.405 Transport Requirements: 00:19:10.405 Secure Channel: Not Specified 00:19:10.405 Port ID: 1 (0x0001) 00:19:10.405 Controller ID: 65535 (0xffff) 00:19:10.405 Admin Max SQ Size: 32 00:19:10.405 Transport Service Identifier: 4420 00:19:10.405 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:10.405 Transport Address: 192.168.100.8 00:19:10.405 Transport Specific Address Subtype - RDMA 00:19:10.405 RDMA QP Service Type: 1 (Reliable Connected) 00:19:10.405 RDMA Provider Type: 1 (No provider specified) 00:19:10.405 RDMA CM Service: 1 (RDMA_CM) 00:19:10.405 Discovery Log Entry 1 00:19:10.405 ---------------------- 00:19:10.405 Transport Type: 1 (RDMA) 00:19:10.405 Address Family: 1 (IPv4) 00:19:10.405 Subsystem Type: 2 (NVM Subsystem) 00:19:10.405 Entry Flags: 00:19:10.405 Duplicate Returned Information: 0 00:19:10.405 Explicit Persistent Connection Support for Discovery: 0 00:19:10.405 Transport Requirements: 00:19:10.405 Secure Channel: Not Specified 00:19:10.405 Port ID: 1 (0x0001) 00:19:10.405 Controller ID: 65535 (0xffff) 00:19:10.405 Admin Max SQ Size: 32 00:19:10.405 Transport Service Identifier: 4420 00:19:10.405 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:10.405 Transport Address: 192.168.100.8 00:19:10.405 Transport Specific Address Subtype - RDMA 00:19:10.405 RDMA QP Service Type: 1 (Reliable Connected) 00:19:10.664 RDMA Provider Type: 1 (No provider specified) 00:19:10.664 RDMA CM Service: 1 (RDMA_CM) 00:19:10.664 13:48:13 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:10.664 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.664 get_feature(0x01) failed 00:19:10.664 
get_feature(0x02) failed 00:19:10.664 get_feature(0x04) failed 00:19:10.664 ===================================================== 00:19:10.664 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:19:10.664 ===================================================== 00:19:10.664 Controller Capabilities/Features 00:19:10.664 ================================ 00:19:10.664 Vendor ID: 0000 00:19:10.664 Subsystem Vendor ID: 0000 00:19:10.664 Serial Number: ba60463d8adc8ff8e149 00:19:10.664 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:10.664 Firmware Version: 6.7.0-68 00:19:10.664 Recommended Arb Burst: 6 00:19:10.664 IEEE OUI Identifier: 00 00 00 00:19:10.664 Multi-path I/O 00:19:10.664 May have multiple subsystem ports: Yes 00:19:10.664 May have multiple controllers: Yes 00:19:10.664 Associated with SR-IOV VF: No 00:19:10.664 Max Data Transfer Size: 1048576 00:19:10.664 Max Number of Namespaces: 1024 00:19:10.664 Max Number of I/O Queues: 128 00:19:10.664 NVMe Specification Version (VS): 1.3 00:19:10.664 NVMe Specification Version (Identify): 1.3 00:19:10.664 Maximum Queue Entries: 128 00:19:10.664 Contiguous Queues Required: No 00:19:10.664 Arbitration Mechanisms Supported 00:19:10.664 Weighted Round Robin: Not Supported 00:19:10.664 Vendor Specific: Not Supported 00:19:10.664 Reset Timeout: 7500 ms 00:19:10.664 Doorbell Stride: 4 bytes 00:19:10.664 NVM Subsystem Reset: Not Supported 00:19:10.664 Command Sets Supported 00:19:10.664 NVM Command Set: Supported 00:19:10.664 Boot Partition: Not Supported 00:19:10.664 Memory Page Size Minimum: 4096 bytes 00:19:10.664 Memory Page Size Maximum: 4096 bytes 00:19:10.664 Persistent Memory Region: Not Supported 00:19:10.664 Optional Asynchronous Events Supported 00:19:10.664 Namespace Attribute Notices: Supported 00:19:10.664 Firmware Activation Notices: Not Supported 00:19:10.664 ANA Change Notices: Supported 00:19:10.664 PLE Aggregate Log Change Notices: Not Supported 00:19:10.665 LBA Status Info Alert Notices: Not Supported 00:19:10.665 EGE Aggregate Log Change Notices: Not Supported 00:19:10.665 Normal NVM Subsystem Shutdown event: Not Supported 00:19:10.665 Zone Descriptor Change Notices: Not Supported 00:19:10.665 Discovery Log Change Notices: Not Supported 00:19:10.665 Controller Attributes 00:19:10.665 128-bit Host Identifier: Supported 00:19:10.665 Non-Operational Permissive Mode: Not Supported 00:19:10.665 NVM Sets: Not Supported 00:19:10.665 Read Recovery Levels: Not Supported 00:19:10.665 Endurance Groups: Not Supported 00:19:10.665 Predictable Latency Mode: Not Supported 00:19:10.665 Traffic Based Keep ALive: Supported 00:19:10.665 Namespace Granularity: Not Supported 00:19:10.665 SQ Associations: Not Supported 00:19:10.665 UUID List: Not Supported 00:19:10.665 Multi-Domain Subsystem: Not Supported 00:19:10.665 Fixed Capacity Management: Not Supported 00:19:10.665 Variable Capacity Management: Not Supported 00:19:10.665 Delete Endurance Group: Not Supported 00:19:10.665 Delete NVM Set: Not Supported 00:19:10.665 Extended LBA Formats Supported: Not Supported 00:19:10.665 Flexible Data Placement Supported: Not Supported 00:19:10.665 00:19:10.665 Controller Memory Buffer Support 00:19:10.665 ================================ 00:19:10.665 Supported: No 00:19:10.665 00:19:10.665 Persistent Memory Region Support 00:19:10.665 ================================ 00:19:10.665 Supported: No 00:19:10.665 00:19:10.665 Admin Command Set Attributes 00:19:10.665 ============================ 00:19:10.665 Security Send/Receive: Not 
Supported 00:19:10.665 Format NVM: Not Supported 00:19:10.665 Firmware Activate/Download: Not Supported 00:19:10.665 Namespace Management: Not Supported 00:19:10.665 Device Self-Test: Not Supported 00:19:10.665 Directives: Not Supported 00:19:10.665 NVMe-MI: Not Supported 00:19:10.665 Virtualization Management: Not Supported 00:19:10.665 Doorbell Buffer Config: Not Supported 00:19:10.665 Get LBA Status Capability: Not Supported 00:19:10.665 Command & Feature Lockdown Capability: Not Supported 00:19:10.665 Abort Command Limit: 4 00:19:10.665 Async Event Request Limit: 4 00:19:10.665 Number of Firmware Slots: N/A 00:19:10.665 Firmware Slot 1 Read-Only: N/A 00:19:10.665 Firmware Activation Without Reset: N/A 00:19:10.665 Multiple Update Detection Support: N/A 00:19:10.665 Firmware Update Granularity: No Information Provided 00:19:10.665 Per-Namespace SMART Log: Yes 00:19:10.665 Asymmetric Namespace Access Log Page: Supported 00:19:10.665 ANA Transition Time : 10 sec 00:19:10.665 00:19:10.665 Asymmetric Namespace Access Capabilities 00:19:10.665 ANA Optimized State : Supported 00:19:10.665 ANA Non-Optimized State : Supported 00:19:10.665 ANA Inaccessible State : Supported 00:19:10.665 ANA Persistent Loss State : Supported 00:19:10.665 ANA Change State : Supported 00:19:10.665 ANAGRPID is not changed : No 00:19:10.665 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:10.665 00:19:10.665 ANA Group Identifier Maximum : 128 00:19:10.665 Number of ANA Group Identifiers : 128 00:19:10.665 Max Number of Allowed Namespaces : 1024 00:19:10.665 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:10.665 Command Effects Log Page: Supported 00:19:10.665 Get Log Page Extended Data: Supported 00:19:10.665 Telemetry Log Pages: Not Supported 00:19:10.665 Persistent Event Log Pages: Not Supported 00:19:10.665 Supported Log Pages Log Page: May Support 00:19:10.665 Commands Supported & Effects Log Page: Not Supported 00:19:10.665 Feature Identifiers & Effects Log Page:May Support 00:19:10.665 NVMe-MI Commands & Effects Log Page: May Support 00:19:10.665 Data Area 4 for Telemetry Log: Not Supported 00:19:10.665 Error Log Page Entries Supported: 128 00:19:10.665 Keep Alive: Supported 00:19:10.665 Keep Alive Granularity: 1000 ms 00:19:10.665 00:19:10.665 NVM Command Set Attributes 00:19:10.665 ========================== 00:19:10.665 Submission Queue Entry Size 00:19:10.665 Max: 64 00:19:10.665 Min: 64 00:19:10.665 Completion Queue Entry Size 00:19:10.665 Max: 16 00:19:10.665 Min: 16 00:19:10.665 Number of Namespaces: 1024 00:19:10.665 Compare Command: Not Supported 00:19:10.665 Write Uncorrectable Command: Not Supported 00:19:10.665 Dataset Management Command: Supported 00:19:10.665 Write Zeroes Command: Supported 00:19:10.665 Set Features Save Field: Not Supported 00:19:10.665 Reservations: Not Supported 00:19:10.665 Timestamp: Not Supported 00:19:10.665 Copy: Not Supported 00:19:10.665 Volatile Write Cache: Present 00:19:10.665 Atomic Write Unit (Normal): 1 00:19:10.665 Atomic Write Unit (PFail): 1 00:19:10.665 Atomic Compare & Write Unit: 1 00:19:10.665 Fused Compare & Write: Not Supported 00:19:10.665 Scatter-Gather List 00:19:10.665 SGL Command Set: Supported 00:19:10.665 SGL Keyed: Supported 00:19:10.665 SGL Bit Bucket Descriptor: Not Supported 00:19:10.665 SGL Metadata Pointer: Not Supported 00:19:10.665 Oversized SGL: Not Supported 00:19:10.665 SGL Metadata Address: Not Supported 00:19:10.665 SGL Offset: Supported 00:19:10.665 Transport SGL Data Block: Not Supported 00:19:10.665 Replay Protected Memory 
Block: Not Supported 00:19:10.665 00:19:10.665 Firmware Slot Information 00:19:10.665 ========================= 00:19:10.665 Active slot: 0 00:19:10.665 00:19:10.665 Asymmetric Namespace Access 00:19:10.665 =========================== 00:19:10.665 Change Count : 0 00:19:10.665 Number of ANA Group Descriptors : 1 00:19:10.665 ANA Group Descriptor : 0 00:19:10.665 ANA Group ID : 1 00:19:10.665 Number of NSID Values : 1 00:19:10.665 Change Count : 0 00:19:10.665 ANA State : 1 00:19:10.665 Namespace Identifier : 1 00:19:10.665 00:19:10.665 Commands Supported and Effects 00:19:10.665 ============================== 00:19:10.665 Admin Commands 00:19:10.665 -------------- 00:19:10.665 Get Log Page (02h): Supported 00:19:10.665 Identify (06h): Supported 00:19:10.665 Abort (08h): Supported 00:19:10.665 Set Features (09h): Supported 00:19:10.665 Get Features (0Ah): Supported 00:19:10.665 Asynchronous Event Request (0Ch): Supported 00:19:10.665 Keep Alive (18h): Supported 00:19:10.665 I/O Commands 00:19:10.665 ------------ 00:19:10.665 Flush (00h): Supported 00:19:10.665 Write (01h): Supported LBA-Change 00:19:10.665 Read (02h): Supported 00:19:10.665 Write Zeroes (08h): Supported LBA-Change 00:19:10.665 Dataset Management (09h): Supported 00:19:10.665 00:19:10.665 Error Log 00:19:10.665 ========= 00:19:10.665 Entry: 0 00:19:10.665 Error Count: 0x3 00:19:10.665 Submission Queue Id: 0x0 00:19:10.665 Command Id: 0x5 00:19:10.665 Phase Bit: 0 00:19:10.665 Status Code: 0x2 00:19:10.665 Status Code Type: 0x0 00:19:10.665 Do Not Retry: 1 00:19:10.665 Error Location: 0x28 00:19:10.665 LBA: 0x0 00:19:10.665 Namespace: 0x0 00:19:10.665 Vendor Log Page: 0x0 00:19:10.665 ----------- 00:19:10.665 Entry: 1 00:19:10.665 Error Count: 0x2 00:19:10.665 Submission Queue Id: 0x0 00:19:10.665 Command Id: 0x5 00:19:10.665 Phase Bit: 0 00:19:10.665 Status Code: 0x2 00:19:10.665 Status Code Type: 0x0 00:19:10.665 Do Not Retry: 1 00:19:10.665 Error Location: 0x28 00:19:10.665 LBA: 0x0 00:19:10.665 Namespace: 0x0 00:19:10.665 Vendor Log Page: 0x0 00:19:10.665 ----------- 00:19:10.665 Entry: 2 00:19:10.665 Error Count: 0x1 00:19:10.665 Submission Queue Id: 0x0 00:19:10.665 Command Id: 0x0 00:19:10.665 Phase Bit: 0 00:19:10.665 Status Code: 0x2 00:19:10.665 Status Code Type: 0x0 00:19:10.665 Do Not Retry: 1 00:19:10.665 Error Location: 0x28 00:19:10.665 LBA: 0x0 00:19:10.665 Namespace: 0x0 00:19:10.665 Vendor Log Page: 0x0 00:19:10.665 00:19:10.665 Number of Queues 00:19:10.665 ================ 00:19:10.665 Number of I/O Submission Queues: 128 00:19:10.665 Number of I/O Completion Queues: 128 00:19:10.665 00:19:10.665 ZNS Specific Controller Data 00:19:10.665 ============================ 00:19:10.665 Zone Append Size Limit: 0 00:19:10.665 00:19:10.665 00:19:10.665 Active Namespaces 00:19:10.665 ================= 00:19:10.665 get_feature(0x05) failed 00:19:10.665 Namespace ID:1 00:19:10.665 Command Set Identifier: NVM (00h) 00:19:10.665 Deallocate: Supported 00:19:10.665 Deallocated/Unwritten Error: Not Supported 00:19:10.665 Deallocated Read Value: Unknown 00:19:10.665 Deallocate in Write Zeroes: Not Supported 00:19:10.665 Deallocated Guard Field: 0xFFFF 00:19:10.665 Flush: Supported 00:19:10.665 Reservation: Not Supported 00:19:10.665 Namespace Sharing Capabilities: Multiple Controllers 00:19:10.665 Size (in LBAs): 1953525168 (931GiB) 00:19:10.665 Capacity (in LBAs): 1953525168 (931GiB) 00:19:10.666 Utilization (in LBAs): 1953525168 (931GiB) 00:19:10.666 UUID: 6f39ad51-5532-4e56-82c3-aada624f3b26 00:19:10.666 Thin 
Provisioning: Not Supported 00:19:10.666 Per-NS Atomic Units: Yes 00:19:10.666 Atomic Boundary Size (Normal): 0 00:19:10.666 Atomic Boundary Size (PFail): 0 00:19:10.666 Atomic Boundary Offset: 0 00:19:10.666 NGUID/EUI64 Never Reused: No 00:19:10.666 ANA group ID: 1 00:19:10.666 Namespace Write Protected: No 00:19:10.666 Number of LBA Formats: 1 00:19:10.666 Current LBA Format: LBA Format #00 00:19:10.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:10.666 00:19:10.666 13:48:13 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:10.666 13:48:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:10.666 13:48:13 -- nvmf/common.sh@117 -- # sync 00:19:10.666 13:48:13 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:10.666 13:48:13 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:10.666 13:48:13 -- nvmf/common.sh@120 -- # set +e 00:19:10.666 13:48:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:10.666 13:48:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:10.666 rmmod nvme_rdma 00:19:10.666 rmmod nvme_fabrics 00:19:10.666 13:48:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:10.666 13:48:13 -- nvmf/common.sh@124 -- # set -e 00:19:10.666 13:48:13 -- nvmf/common.sh@125 -- # return 0 00:19:10.666 13:48:13 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:10.666 13:48:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:10.666 13:48:13 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:10.666 13:48:13 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:10.666 13:48:13 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:10.666 13:48:13 -- nvmf/common.sh@675 -- # echo 0 00:19:10.666 13:48:13 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:10.666 13:48:13 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:10.666 13:48:13 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:10.666 13:48:13 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:10.666 13:48:13 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:10.666 13:48:13 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:19:10.666 13:48:13 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:12.573 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:12.573 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:12.573 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:12.573 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:12.573 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:12.573 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:12.573 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:12.573 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:12.573 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:13.508 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:19:13.508 00:19:13.508 real 0m8.717s 00:19:13.508 user 0m2.390s 00:19:13.508 sys 0m4.320s 00:19:13.508 
13:48:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.508 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.508 ************************************ 00:19:13.508 END TEST nvmf_identify_kernel_target 00:19:13.508 ************************************ 00:19:13.508 13:48:16 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:13.508 13:48:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:13.508 13:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.508 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.766 ************************************ 00:19:13.766 START TEST nvmf_auth 00:19:13.766 ************************************ 00:19:13.767 13:48:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:13.767 * Looking for test storage... 00:19:13.767 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:13.767 13:48:16 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.767 13:48:16 -- nvmf/common.sh@7 -- # uname -s 00:19:13.767 13:48:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.767 13:48:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.767 13:48:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.767 13:48:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.767 13:48:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.767 13:48:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.767 13:48:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.767 13:48:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.767 13:48:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.767 13:48:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.767 13:48:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:13.767 13:48:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:19:13.767 13:48:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.767 13:48:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.767 13:48:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.767 13:48:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.767 13:48:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:13.767 13:48:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.767 13:48:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.767 13:48:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.767 13:48:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.767 13:48:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.767 13:48:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.767 13:48:16 -- paths/export.sh@5 -- # export PATH 00:19:13.767 13:48:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.767 13:48:16 -- nvmf/common.sh@47 -- # : 0 00:19:13.767 13:48:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.767 13:48:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.767 13:48:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.767 13:48:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.767 13:48:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.767 13:48:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.767 13:48:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.767 13:48:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.767 13:48:16 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:13.767 13:48:16 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:13.767 13:48:16 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:13.767 13:48:16 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:13.767 13:48:16 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:13.767 13:48:16 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:13.767 13:48:16 -- host/auth.sh@21 -- # keys=() 00:19:13.767 13:48:16 -- host/auth.sh@77 -- # nvmftestinit 00:19:13.767 13:48:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:13.767 13:48:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.767 13:48:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:13.767 13:48:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:13.767 13:48:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:13.767 13:48:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.767 13:48:16 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.767 13:48:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.767 13:48:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:13.767 13:48:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:13.767 13:48:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:13.767 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:17.054 13:48:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:17.054 13:48:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.054 13:48:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.054 13:48:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.054 13:48:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.054 13:48:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.054 13:48:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.054 13:48:19 -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.054 13:48:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.054 13:48:19 -- nvmf/common.sh@296 -- # e810=() 00:19:17.054 13:48:19 -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.054 13:48:19 -- nvmf/common.sh@297 -- # x722=() 00:19:17.054 13:48:19 -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.054 13:48:19 -- nvmf/common.sh@298 -- # mlx=() 00:19:17.054 13:48:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.054 13:48:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.054 13:48:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.054 13:48:19 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:17.054 13:48:19 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:17.054 13:48:19 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:17.054 13:48:19 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:17.054 13:48:19 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:17.054 13:48:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.054 13:48:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.054 13:48:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:19:17.054 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:19:17.054 13:48:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:17.054 13:48:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:19:17.055 13:48:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:19:17.055 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:19:17.055 13:48:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:17.055 13:48:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.055 13:48:19 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.055 13:48:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:17.055 13:48:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.055 13:48:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:19:17.055 Found net devices under 0000:81:00.0: mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.055 13:48:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.055 13:48:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:17.055 13:48:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.055 13:48:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:19:17.055 Found net devices under 0000:81:00.1: mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.055 13:48:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:17.055 13:48:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:17.055 13:48:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:17.055 13:48:19 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:17.055 13:48:19 -- nvmf/common.sh@58 -- # uname 00:19:17.055 13:48:19 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:17.055 13:48:19 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:17.055 13:48:19 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:17.055 13:48:19 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:17.055 13:48:19 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:17.055 13:48:19 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:17.055 13:48:19 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:17.055 13:48:19 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:17.055 13:48:19 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:17.055 13:48:19 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:17.055 13:48:19 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:17.055 13:48:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:17.055 13:48:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:17.055 13:48:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:17.055 13:48:19 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:17.055 13:48:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:17.055 13:48:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@105 -- # continue 2 00:19:17.055 13:48:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@105 -- # continue 2 00:19:17.055 13:48:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:17.055 13:48:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:17.055 13:48:19 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:17.055 13:48:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:17.055 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:17.055 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:19:17.055 altname enp129s0f0np0 00:19:17.055 inet 192.168.100.8/24 scope global mlx_0_0 00:19:17.055 valid_lft forever preferred_lft forever 00:19:17.055 13:48:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:17.055 13:48:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:17.055 13:48:19 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:17.055 13:48:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:17.055 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:17.055 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:19:17.055 altname enp129s0f1np1 00:19:17.055 inet 192.168.100.9/24 scope global mlx_0_1 00:19:17.055 valid_lft forever preferred_lft forever 00:19:17.055 13:48:19 -- nvmf/common.sh@411 -- # return 0 00:19:17.055 13:48:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:17.055 13:48:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:17.055 13:48:19 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:17.055 13:48:19 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:17.055 13:48:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:17.055 13:48:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:17.055 13:48:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:17.055 13:48:19 -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:17.055 13:48:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:17.055 13:48:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@105 -- # continue 2 00:19:17.055 13:48:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:17.055 13:48:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:17.055 13:48:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@105 -- # continue 2 00:19:17.055 13:48:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:17.055 13:48:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:17.055 13:48:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:17.055 13:48:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:17.055 13:48:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:17.055 13:48:19 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:17.055 192.168.100.9' 00:19:17.055 13:48:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:17.055 192.168.100.9' 00:19:17.055 13:48:19 -- nvmf/common.sh@446 -- # head -n 1 00:19:17.055 13:48:19 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:17.055 13:48:19 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:17.055 192.168.100.9' 00:19:17.055 13:48:19 -- nvmf/common.sh@447 -- # tail -n +2 00:19:17.055 13:48:19 -- nvmf/common.sh@447 -- # head -n 1 00:19:17.055 13:48:19 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:17.055 13:48:19 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:17.055 13:48:19 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:17.055 13:48:19 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:17.055 13:48:19 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:17.055 13:48:19 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:17.055 13:48:19 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:19:17.055 13:48:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:17.055 13:48:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:17.055 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:17.055 13:48:19 -- nvmf/common.sh@470 -- # nvmfpid=1193249 00:19:17.055 13:48:19 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:17.055 13:48:19 -- nvmf/common.sh@471 -- # waitforlisten 1193249 00:19:17.055 13:48:19 -- 
common/autotest_common.sh@817 -- # '[' -z 1193249 ']' 00:19:17.055 13:48:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.055 13:48:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.055 13:48:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.055 13:48:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.055 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:17.055 13:48:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.055 13:48:19 -- common/autotest_common.sh@850 -- # return 0 00:19:17.055 13:48:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:17.055 13:48:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:17.056 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:17.056 13:48:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.056 13:48:19 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:17.056 13:48:19 -- host/auth.sh@81 -- # gen_key null 32 00:19:17.056 13:48:19 -- host/auth.sh@53 -- # local digest len file key 00:19:17.056 13:48:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.056 13:48:19 -- host/auth.sh@54 -- # local -A digests 00:19:17.056 13:48:19 -- host/auth.sh@56 -- # digest=null 00:19:17.056 13:48:19 -- host/auth.sh@56 -- # len=32 00:19:17.056 13:48:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:17.056 13:48:19 -- host/auth.sh@57 -- # key=24a1489869812e82e4f99fd9e0bb6ec8 00:19:17.056 13:48:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:17.056 13:48:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.4pI 00:19:17.056 13:48:19 -- host/auth.sh@59 -- # format_dhchap_key 24a1489869812e82e4f99fd9e0bb6ec8 0 00:19:17.056 13:48:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 24a1489869812e82e4f99fd9e0bb6ec8 0 00:19:17.056 13:48:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:17.056 13:48:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:17.056 13:48:19 -- nvmf/common.sh@693 -- # key=24a1489869812e82e4f99fd9e0bb6ec8 00:19:17.056 13:48:19 -- nvmf/common.sh@693 -- # digest=0 00:19:17.056 13:48:19 -- nvmf/common.sh@694 -- # python - 00:19:17.056 13:48:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.4pI 00:19:17.056 13:48:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.4pI 00:19:17.056 13:48:19 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.4pI 00:19:17.056 13:48:19 -- host/auth.sh@82 -- # gen_key null 48 00:19:17.056 13:48:19 -- host/auth.sh@53 -- # local digest len file key 00:19:17.056 13:48:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.056 13:48:19 -- host/auth.sh@54 -- # local -A digests 00:19:17.056 13:48:19 -- host/auth.sh@56 -- # digest=null 00:19:17.056 13:48:19 -- host/auth.sh@56 -- # len=48 00:19:17.056 13:48:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.056 13:48:19 -- host/auth.sh@57 -- # key=6f6bc24d7f1991bfc194d2c80828ac63d39958e958d0376c 00:19:17.056 13:48:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:17.056 13:48:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.xEf 00:19:17.056 13:48:19 -- host/auth.sh@59 -- # format_dhchap_key 6f6bc24d7f1991bfc194d2c80828ac63d39958e958d0376c 0 00:19:17.056 13:48:19 -- 
nvmf/common.sh@708 -- # format_key DHHC-1 6f6bc24d7f1991bfc194d2c80828ac63d39958e958d0376c 0 00:19:17.056 13:48:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:17.056 13:48:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:17.056 13:48:19 -- nvmf/common.sh@693 -- # key=6f6bc24d7f1991bfc194d2c80828ac63d39958e958d0376c 00:19:17.056 13:48:19 -- nvmf/common.sh@693 -- # digest=0 00:19:17.056 13:48:19 -- nvmf/common.sh@694 -- # python - 00:19:17.314 13:48:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.xEf 00:19:17.314 13:48:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.xEf 00:19:17.314 13:48:19 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.xEf 00:19:17.314 13:48:19 -- host/auth.sh@83 -- # gen_key sha256 32 00:19:17.314 13:48:19 -- host/auth.sh@53 -- # local digest len file key 00:19:17.314 13:48:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.314 13:48:19 -- host/auth.sh@54 -- # local -A digests 00:19:17.314 13:48:19 -- host/auth.sh@56 -- # digest=sha256 00:19:17.314 13:48:19 -- host/auth.sh@56 -- # len=32 00:19:17.314 13:48:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:17.314 13:48:19 -- host/auth.sh@57 -- # key=8bc960de6a1ecc51463d49de6367dfd5 00:19:17.314 13:48:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:19:17.314 13:48:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.pvw 00:19:17.314 13:48:19 -- host/auth.sh@59 -- # format_dhchap_key 8bc960de6a1ecc51463d49de6367dfd5 1 00:19:17.314 13:48:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 8bc960de6a1ecc51463d49de6367dfd5 1 00:19:17.314 13:48:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:17.314 13:48:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:17.314 13:48:19 -- nvmf/common.sh@693 -- # key=8bc960de6a1ecc51463d49de6367dfd5 00:19:17.314 13:48:19 -- nvmf/common.sh@693 -- # digest=1 00:19:17.314 13:48:19 -- nvmf/common.sh@694 -- # python - 00:19:17.314 13:48:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.pvw 00:19:17.314 13:48:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.pvw 00:19:17.314 13:48:19 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.pvw 00:19:17.314 13:48:19 -- host/auth.sh@84 -- # gen_key sha384 48 00:19:17.314 13:48:19 -- host/auth.sh@53 -- # local digest len file key 00:19:17.314 13:48:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.315 13:48:19 -- host/auth.sh@54 -- # local -A digests 00:19:17.315 13:48:19 -- host/auth.sh@56 -- # digest=sha384 00:19:17.315 13:48:19 -- host/auth.sh@56 -- # len=48 00:19:17.315 13:48:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.315 13:48:19 -- host/auth.sh@57 -- # key=e7a525b13becd44565512b6e8fd3e97d1b315a0b42609a10 00:19:17.315 13:48:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:19:17.315 13:48:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.KL8 00:19:17.315 13:48:19 -- host/auth.sh@59 -- # format_dhchap_key e7a525b13becd44565512b6e8fd3e97d1b315a0b42609a10 2 00:19:17.315 13:48:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 e7a525b13becd44565512b6e8fd3e97d1b315a0b42609a10 2 00:19:17.315 13:48:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:17.315 13:48:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:17.315 13:48:19 -- nvmf/common.sh@693 -- # key=e7a525b13becd44565512b6e8fd3e97d1b315a0b42609a10 00:19:17.315 13:48:19 -- nvmf/common.sh@693 -- # digest=2 00:19:17.315 13:48:19 -- nvmf/common.sh@694 -- # python - 00:19:17.315 13:48:20 
-- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.KL8 00:19:17.315 13:48:20 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.KL8 00:19:17.315 13:48:20 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.KL8 00:19:17.315 13:48:20 -- host/auth.sh@85 -- # gen_key sha512 64 00:19:17.315 13:48:20 -- host/auth.sh@53 -- # local digest len file key 00:19:17.315 13:48:20 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.315 13:48:20 -- host/auth.sh@54 -- # local -A digests 00:19:17.315 13:48:20 -- host/auth.sh@56 -- # digest=sha512 00:19:17.315 13:48:20 -- host/auth.sh@56 -- # len=64 00:19:17.315 13:48:20 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:17.315 13:48:20 -- host/auth.sh@57 -- # key=6b3a327dfdc54e0892cf9774ac1fb887f502a8d52c9aa58f5b7df71fb0594fc4 00:19:17.315 13:48:20 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:19:17.315 13:48:20 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.2Xy 00:19:17.315 13:48:20 -- host/auth.sh@59 -- # format_dhchap_key 6b3a327dfdc54e0892cf9774ac1fb887f502a8d52c9aa58f5b7df71fb0594fc4 3 00:19:17.315 13:48:20 -- nvmf/common.sh@708 -- # format_key DHHC-1 6b3a327dfdc54e0892cf9774ac1fb887f502a8d52c9aa58f5b7df71fb0594fc4 3 00:19:17.315 13:48:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:17.315 13:48:20 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:17.315 13:48:20 -- nvmf/common.sh@693 -- # key=6b3a327dfdc54e0892cf9774ac1fb887f502a8d52c9aa58f5b7df71fb0594fc4 00:19:17.315 13:48:20 -- nvmf/common.sh@693 -- # digest=3 00:19:17.315 13:48:20 -- nvmf/common.sh@694 -- # python - 00:19:17.315 13:48:20 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.2Xy 00:19:17.315 13:48:20 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.2Xy 00:19:17.315 13:48:20 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.2Xy 00:19:17.315 13:48:20 -- host/auth.sh@87 -- # waitforlisten 1193249 00:19:17.315 13:48:20 -- common/autotest_common.sh@817 -- # '[' -z 1193249 ']' 00:19:17.315 13:48:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.315 13:48:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.315 13:48:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
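[Note] The gen_key calls traced above draw random bytes with xxd, wrap them into a DHHC-1 secret via an inline python step, write the result to a mktemp'd spdk.key-* file and chmod it 0600. A minimal standalone sketch of that flow follows; it assumes (as the python step suggests) that the wrapper base64-encodes the ASCII secret followed by its little-endian CRC-32, and the variable and file names are illustrative rather than the script's own:

# sketch: generate a 48-hex-char secret and store it as a DHHC-1 key file
key=$(xxd -p -c0 -l 24 /dev/urandom)           # same draw as 'gen_key null 48'
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                   # ASCII hex string, as seen in the log
crc = struct.pack('<I', zlib.crc32(secret))     # assumed little-endian CRC-32 trailer
print('DHHC-1:00:%s:' % base64.b64encode(secret + crc).decode())
PY
chmod 0600 "$file"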
00:19:17.315 13:48:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.315 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:17.573 13:48:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.573 13:48:20 -- common/autotest_common.sh@850 -- # return 0 00:19:17.573 13:48:20 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:17.573 13:48:20 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4pI 00:19:17.573 13:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.573 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:17.832 13:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.832 13:48:20 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:17.832 13:48:20 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xEf 00:19:17.832 13:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.833 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:17.833 13:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.833 13:48:20 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:17.833 13:48:20 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pvw 00:19:17.833 13:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.833 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:17.833 13:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.833 13:48:20 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:17.833 13:48:20 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.KL8 00:19:17.833 13:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.833 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:17.833 13:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.833 13:48:20 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:17.833 13:48:20 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2Xy 00:19:17.833 13:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.833 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:17.833 13:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.833 13:48:20 -- host/auth.sh@92 -- # nvmet_auth_init 00:19:17.833 13:48:20 -- host/auth.sh@35 -- # get_main_ns_ip 00:19:17.833 13:48:20 -- nvmf/common.sh@717 -- # local ip 00:19:17.833 13:48:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.833 13:48:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.833 13:48:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.833 13:48:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.833 13:48:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:17.833 13:48:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:17.833 13:48:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:17.833 13:48:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:17.833 13:48:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:17.833 13:48:20 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:19:17.833 13:48:20 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:19:17.833 13:48:20 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:17.833 13:48:20 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:17.833 13:48:20 -- 
nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:17.833 13:48:20 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:17.833 13:48:20 -- nvmf/common.sh@628 -- # local block nvme 00:19:17.833 13:48:20 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:19:17.833 13:48:20 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:17.833 13:48:20 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:17.833 13:48:20 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:19.209 Waiting for block devices as requested 00:19:19.209 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:19:19.209 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:19.209 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:19.209 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:19.467 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:19.467 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:19.467 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:19.467 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:19.726 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:19.726 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:19.726 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:19.985 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:19.985 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:19.985 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:19.985 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:20.244 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:20.244 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:20.502 13:48:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:20.502 13:48:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:20.502 13:48:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:20.502 13:48:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:20.502 13:48:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:20.502 13:48:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:20.502 13:48:23 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:20.502 13:48:23 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:20.502 13:48:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:20.760 No valid GPT data, bailing 00:19:20.760 13:48:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:20.760 13:48:23 -- scripts/common.sh@391 -- # pt= 00:19:20.760 13:48:23 -- scripts/common.sh@392 -- # return 1 00:19:20.760 13:48:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:20.760 13:48:23 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:20.760 13:48:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:20.760 13:48:23 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:20.760 13:48:23 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:20.760 13:48:23 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:20.760 13:48:23 -- nvmf/common.sh@656 -- # echo 1 00:19:20.760 13:48:23 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:20.760 13:48:23 -- nvmf/common.sh@658 -- # echo 1 00:19:20.760 13:48:23 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:19:20.760 13:48:23 -- nvmf/common.sh@661 -- # echo rdma 
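[Note] A few entries above, the five generated key files are registered with the running nvmf target through rpc_cmd keyring_file_add_key (key0..key4). rpc_cmd in this suite effectively forwards its arguments to scripts/rpc.py against the /var/tmp/spdk.sock socket shown by waitforlisten, so the standalone equivalent would look roughly like:

./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.4pI
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1 /tmp/spdk.key-null.xEf
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2 /tmp/spdk.key-sha256.pvw
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key3 /tmp/spdk.key-sha384.KL8
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key4 /tmp/spdk.key-sha512.2Xy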
00:19:20.760 13:48:23 -- nvmf/common.sh@662 -- # echo 4420 00:19:20.760 13:48:23 -- nvmf/common.sh@663 -- # echo ipv4 00:19:20.760 13:48:23 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:20.760 13:48:23 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -t rdma -s 4420 00:19:20.760 00:19:20.760 Discovery Log Number of Records 2, Generation counter 2 00:19:20.760 =====Discovery Log Entry 0====== 00:19:20.760 trtype: rdma 00:19:20.760 adrfam: ipv4 00:19:20.760 subtype: current discovery subsystem 00:19:20.760 treq: not specified, sq flow control disable supported 00:19:20.760 portid: 1 00:19:20.760 trsvcid: 4420 00:19:20.760 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:20.760 traddr: 192.168.100.8 00:19:20.760 eflags: none 00:19:20.760 rdma_prtype: not specified 00:19:20.760 rdma_qptype: connected 00:19:20.760 rdma_cms: rdma-cm 00:19:20.760 rdma_pkey: 0x0000 00:19:20.760 =====Discovery Log Entry 1====== 00:19:20.760 trtype: rdma 00:19:20.760 adrfam: ipv4 00:19:20.760 subtype: nvme subsystem 00:19:20.760 treq: not specified, sq flow control disable supported 00:19:20.760 portid: 1 00:19:20.760 trsvcid: 4420 00:19:20.760 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:20.760 traddr: 192.168.100.8 00:19:20.760 eflags: none 00:19:20.760 rdma_prtype: not specified 00:19:20.760 rdma_qptype: connected 00:19:20.760 rdma_cms: rdma-cm 00:19:20.760 rdma_pkey: 0x0000 00:19:20.760 13:48:23 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:20.760 13:48:23 -- host/auth.sh@37 -- # echo 0 00:19:20.760 13:48:23 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:20.760 13:48:23 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:20.760 13:48:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.760 13:48:23 -- host/auth.sh@44 -- # digest=sha256 00:19:20.760 13:48:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:20.760 13:48:23 -- host/auth.sh@44 -- # keyid=1 00:19:20.760 13:48:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:20.760 13:48:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:20.760 13:48:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:20.760 13:48:23 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:20.760 13:48:23 -- host/auth.sh@100 -- # IFS=, 00:19:20.760 13:48:23 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:19:20.760 13:48:23 -- host/auth.sh@100 -- # IFS=, 00:19:20.760 13:48:23 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:20.760 13:48:23 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:20.760 13:48:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.760 13:48:23 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:19:20.760 13:48:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:20.760 13:48:23 -- host/auth.sh@68 -- # keyid=1 00:19:20.760 13:48:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:20.760 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.761 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:20.761 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.761 13:48:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.761 13:48:23 -- nvmf/common.sh@717 -- # local ip 00:19:20.761 13:48:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.761 13:48:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.761 13:48:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.761 13:48:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.761 13:48:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:20.761 13:48:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:20.761 13:48:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:20.761 13:48:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:20.761 13:48:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:20.761 13:48:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:20.761 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.761 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.019 nvme0n1 00:19:21.019 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.019 13:48:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.019 13:48:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.019 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.019 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.019 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.019 13:48:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.019 13:48:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.019 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.019 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.019 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.019 13:48:23 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:21.019 13:48:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.019 13:48:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.019 13:48:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:21.019 13:48:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.019 13:48:23 -- host/auth.sh@44 -- # digest=sha256 00:19:21.019 13:48:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:21.019 13:48:23 -- host/auth.sh@44 -- # keyid=0 00:19:21.019 13:48:23 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:21.019 13:48:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:21.019 13:48:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:21.019 13:48:23 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:21.019 13:48:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:19:21.019 13:48:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.019 13:48:23 -- host/auth.sh@68 -- # digest=sha256 00:19:21.019 13:48:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:21.019 13:48:23 -- host/auth.sh@68 -- # keyid=0 00:19:21.019 13:48:23 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.019 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.019 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.019 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.019 13:48:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.019 13:48:23 -- nvmf/common.sh@717 -- # local ip 00:19:21.019 13:48:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.019 13:48:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.019 13:48:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.019 13:48:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.019 13:48:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:21.019 13:48:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:21.019 13:48:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:21.019 13:48:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:21.019 13:48:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:21.019 13:48:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:21.019 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.019 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.278 nvme0n1 00:19:21.278 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.278 13:48:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.278 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.278 13:48:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.278 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.278 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.278 13:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.278 13:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.278 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.278 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.278 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.278 13:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.278 13:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:21.278 13:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.278 13:48:24 -- host/auth.sh@44 -- # digest=sha256 00:19:21.278 13:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:21.278 13:48:24 -- host/auth.sh@44 -- # keyid=1 00:19:21.278 13:48:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:21.278 13:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:21.278 13:48:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:21.278 13:48:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:21.278 13:48:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:19:21.278 13:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.278 13:48:24 -- host/auth.sh@68 -- # digest=sha256 00:19:21.278 13:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:21.278 13:48:24 -- host/auth.sh@68 -- # keyid=1 00:19:21.278 13:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:19:21.278 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.278 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.537 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.537 13:48:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.537 13:48:24 -- nvmf/common.sh@717 -- # local ip 00:19:21.537 13:48:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.537 13:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.537 13:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.537 13:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.537 13:48:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:21.537 13:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:21.537 13:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:21.537 13:48:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:21.537 13:48:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:21.537 13:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:21.537 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.537 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.537 nvme0n1 00:19:21.537 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.537 13:48:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.537 13:48:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.537 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.537 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.537 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.537 13:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.537 13:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.537 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.537 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.795 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.795 13:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.795 13:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:21.795 13:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.795 13:48:24 -- host/auth.sh@44 -- # digest=sha256 00:19:21.795 13:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:21.795 13:48:24 -- host/auth.sh@44 -- # keyid=2 00:19:21.795 13:48:24 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:21.795 13:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:21.795 13:48:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:21.795 13:48:24 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:21.795 13:48:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:19:21.795 13:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.795 13:48:24 -- host/auth.sh@68 -- # digest=sha256 00:19:21.795 13:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:21.795 13:48:24 -- host/auth.sh@68 -- # keyid=2 00:19:21.795 13:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.795 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.795 13:48:24 -- common/autotest_common.sh@10 -- # 
set +x 00:19:21.795 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.795 13:48:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.795 13:48:24 -- nvmf/common.sh@717 -- # local ip 00:19:21.795 13:48:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.795 13:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.795 13:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.795 13:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.795 13:48:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:21.795 13:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:21.795 13:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:21.795 13:48:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:21.795 13:48:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:21.795 13:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.795 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.795 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.795 nvme0n1 00:19:21.795 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.795 13:48:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.795 13:48:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.795 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.795 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.053 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.053 13:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.053 13:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.053 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.053 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.053 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.053 13:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.053 13:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:22.053 13:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.053 13:48:24 -- host/auth.sh@44 -- # digest=sha256 00:19:22.053 13:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.053 13:48:24 -- host/auth.sh@44 -- # keyid=3 00:19:22.053 13:48:24 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:22.053 13:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:22.053 13:48:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:22.053 13:48:24 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:22.053 13:48:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:19:22.053 13:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.053 13:48:24 -- host/auth.sh@68 -- # digest=sha256 00:19:22.053 13:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:22.053 13:48:24 -- host/auth.sh@68 -- # keyid=3 00:19:22.053 13:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.053 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.053 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.053 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.053 
13:48:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.053 13:48:24 -- nvmf/common.sh@717 -- # local ip 00:19:22.053 13:48:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.053 13:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.053 13:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.053 13:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.053 13:48:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:22.053 13:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:22.053 13:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:22.053 13:48:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:22.053 13:48:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:22.053 13:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:22.053 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.053 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.311 nvme0n1 00:19:22.311 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.311 13:48:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.311 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.311 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.311 13:48:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.311 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.311 13:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.311 13:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.311 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.311 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.311 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.311 13:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.311 13:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:22.311 13:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.311 13:48:24 -- host/auth.sh@44 -- # digest=sha256 00:19:22.311 13:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.311 13:48:24 -- host/auth.sh@44 -- # keyid=4 00:19:22.311 13:48:24 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:22.311 13:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:22.311 13:48:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:22.311 13:48:24 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:22.311 13:48:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:19:22.311 13:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.311 13:48:24 -- host/auth.sh@68 -- # digest=sha256 00:19:22.311 13:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:22.311 13:48:24 -- host/auth.sh@68 -- # keyid=4 00:19:22.311 13:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.311 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.311 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.311 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.311 13:48:24 -- host/auth.sh@70 -- # get_main_ns_ip 
00:19:22.311 13:48:24 -- nvmf/common.sh@717 -- # local ip 00:19:22.311 13:48:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.311 13:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.311 13:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.311 13:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.311 13:48:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:22.311 13:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:22.311 13:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:22.311 13:48:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:22.311 13:48:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:22.311 13:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:22.311 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.311 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:22.569 nvme0n1 00:19:22.569 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.569 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.569 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.569 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:22.569 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.569 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.569 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.569 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.569 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.569 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:22.569 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.569 13:48:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.569 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.569 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:22.569 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.569 13:48:25 -- host/auth.sh@44 -- # digest=sha256 00:19:22.569 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:22.569 13:48:25 -- host/auth.sh@44 -- # keyid=0 00:19:22.569 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:22.569 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:22.569 13:48:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:22.569 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:22.569 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:19:22.569 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.569 13:48:25 -- host/auth.sh@68 -- # digest=sha256 00:19:22.569 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:22.569 13:48:25 -- host/auth.sh@68 -- # keyid=0 00:19:22.569 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.569 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.569 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:22.569 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.569 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.569 13:48:25 -- nvmf/common.sh@717 -- # local ip 
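[Note] Every connect_authenticate round in this log is the same four-step host-side sequence: pin the initiator's DH-HMAC-CHAP digest and DH group, attach a controller with one of the registered keys, check that nvme0 shows up in bdev_nvme_get_controllers, then detach it. Pulled out of the xtrace noise, one round looks roughly like this (arguments copied from the log; direct rpc.py invocation and the default RPC socket are assumed):

./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0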
00:19:22.569 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.569 13:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.569 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.569 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.569 13:48:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:22.569 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:22.570 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:22.570 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:22.570 13:48:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:22.570 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:22.570 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.570 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:22.827 nvme0n1 00:19:22.827 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.827 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.827 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.827 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.827 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:22.827 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.827 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.827 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.827 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.827 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:22.827 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.827 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.827 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:22.827 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.827 13:48:25 -- host/auth.sh@44 -- # digest=sha256 00:19:22.827 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:22.827 13:48:25 -- host/auth.sh@44 -- # keyid=1 00:19:22.827 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:22.827 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:22.827 13:48:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:22.827 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:22.827 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:19:22.827 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.827 13:48:25 -- host/auth.sh@68 -- # digest=sha256 00:19:22.827 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:22.827 13:48:25 -- host/auth.sh@68 -- # keyid=1 00:19:22.827 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.827 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.827 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:22.827 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.827 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.827 13:48:25 -- nvmf/common.sh@717 -- # local ip 00:19:22.827 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.827 13:48:25 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.827 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.827 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.827 13:48:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:22.827 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:22.827 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:22.827 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:22.827 13:48:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:22.827 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:22.827 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.827 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:23.085 nvme0n1 00:19:23.085 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.085 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.085 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.085 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.085 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:23.085 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.085 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.085 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.085 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.085 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.343 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.343 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:23.343 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.343 13:48:25 -- host/auth.sh@44 -- # digest=sha256 00:19:23.343 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.343 13:48:25 -- host/auth.sh@44 -- # keyid=2 00:19:23.343 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:23.343 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:23.343 13:48:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:23.343 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:23.343 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:19:23.343 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.343 13:48:25 -- host/auth.sh@68 -- # digest=sha256 00:19:23.343 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:23.343 13:48:25 -- host/auth.sh@68 -- # keyid=2 00:19:23.343 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.343 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.343 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.343 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.343 13:48:25 -- nvmf/common.sh@717 -- # local ip 00:19:23.343 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.343 13:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.343 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
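[Note] On the target side, each nvmet_auth_set_key call (host/auth.sh@42-49) pushes the chosen hash, DH group and DHHC-1 secret into the kernel nvmet host entry that was created earlier under /sys/kernel/config/nvmet/hosts. The xtrace only shows the echo halves of those redirections, so the configfs attribute names below are an assumption based on the upstream nvmet layout, not something visible in this log:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # assumed attribute name
echo ffdhe2048 > "$host/dhchap_dhgroup"        # assumed attribute name
echo 'DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==:' > "$host/dhchap_key"   # assumed attribute name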
00:19:23.343 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.343 13:48:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:23.343 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.343 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.343 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:23.343 13:48:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:23.343 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:23.343 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.343 13:48:25 -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 nvme0n1 00:19:23.343 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.601 13:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.601 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.601 13:48:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.601 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:23.601 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.601 13:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.601 13:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.601 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.601 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:23.601 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.601 13:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.601 13:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:23.601 13:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.601 13:48:26 -- host/auth.sh@44 -- # digest=sha256 00:19:23.601 13:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.601 13:48:26 -- host/auth.sh@44 -- # keyid=3 00:19:23.601 13:48:26 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:23.601 13:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:23.601 13:48:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:23.601 13:48:26 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:23.601 13:48:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:19:23.601 13:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.601 13:48:26 -- host/auth.sh@68 -- # digest=sha256 00:19:23.601 13:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:23.601 13:48:26 -- host/auth.sh@68 -- # keyid=3 00:19:23.601 13:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.601 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.601 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:23.601 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.601 13:48:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.601 13:48:26 -- nvmf/common.sh@717 -- # local ip 00:19:23.601 13:48:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.601 13:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.601 13:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.601 13:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
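[Note] The ip_candidates block repeating before every attach is get_main_ns_ip: it maps the transport under test to the shell variable holding the matching address (rdma -> NVMF_FIRST_TARGET_IP, i.e. 192.168.100.8 here) and expands it indirectly. A condensed sketch of that helper, together with the get_ip_address pipeline used earlier to fill those variables; the structure is paraphrased and the transport variable name is illustrative:

get_ip_address() {      # first IPv4 address of an interface, as in nvmf/common.sh@112-113
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

get_main_ns_ip() {      # pick the address variable for the transport in use
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    local var="${ip_candidates[$transport]:?unsupported transport}"
    echo "${!var}"      # indirect expansion, e.g. 192.168.100.8
}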
00:19:23.601 13:48:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:23.601 13:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.601 13:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.601 13:48:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:23.601 13:48:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:23.601 13:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:23.601 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.601 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:23.858 nvme0n1 00:19:23.858 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.858 13:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.858 13:48:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.858 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.858 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:23.858 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.858 13:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.858 13:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.858 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.858 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:23.858 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.858 13:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.858 13:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:23.858 13:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.858 13:48:26 -- host/auth.sh@44 -- # digest=sha256 00:19:23.858 13:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.858 13:48:26 -- host/auth.sh@44 -- # keyid=4 00:19:23.858 13:48:26 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:23.858 13:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:23.858 13:48:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:23.858 13:48:26 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:23.858 13:48:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:19:23.858 13:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.858 13:48:26 -- host/auth.sh@68 -- # digest=sha256 00:19:23.858 13:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:23.858 13:48:26 -- host/auth.sh@68 -- # keyid=4 00:19:23.858 13:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.858 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.858 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:23.858 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.858 13:48:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.858 13:48:26 -- nvmf/common.sh@717 -- # local ip 00:19:23.858 13:48:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.858 13:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.858 13:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.858 13:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.858 13:48:26 -- nvmf/common.sh@723 -- # [[ -z 
rdma ]] 00:19:23.858 13:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.858 13:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.858 13:48:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:23.858 13:48:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:23.858 13:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:23.859 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.859 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:24.166 nvme0n1 00:19:24.166 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.166 13:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.166 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.166 13:48:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.166 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:24.166 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.166 13:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.166 13:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.166 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.166 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:24.166 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.166 13:48:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.166 13:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.166 13:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:24.166 13:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.166 13:48:26 -- host/auth.sh@44 -- # digest=sha256 00:19:24.166 13:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:24.166 13:48:26 -- host/auth.sh@44 -- # keyid=0 00:19:24.166 13:48:26 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:24.166 13:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:24.166 13:48:26 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:24.166 13:48:26 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:24.166 13:48:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:19:24.166 13:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.166 13:48:26 -- host/auth.sh@68 -- # digest=sha256 00:19:24.166 13:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:24.166 13:48:26 -- host/auth.sh@68 -- # keyid=0 00:19:24.166 13:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.166 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.166 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:24.166 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.166 13:48:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.166 13:48:26 -- nvmf/common.sh@717 -- # local ip 00:19:24.166 13:48:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.166 13:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.166 13:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.166 13:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.166 13:48:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:24.166 13:48:26 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:19:24.166 13:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:24.166 13:48:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:24.166 13:48:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:24.166 13:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:24.166 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.166 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:19:24.732 nvme0n1 00:19:24.732 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.732 13:48:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.732 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.732 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:24.732 13:48:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.732 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.732 13:48:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.732 13:48:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.732 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.732 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:24.732 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.732 13:48:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.732 13:48:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:24.732 13:48:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.732 13:48:27 -- host/auth.sh@44 -- # digest=sha256 00:19:24.732 13:48:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:24.732 13:48:27 -- host/auth.sh@44 -- # keyid=1 00:19:24.732 13:48:27 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:24.732 13:48:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:24.732 13:48:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:24.732 13:48:27 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:24.732 13:48:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:19:24.732 13:48:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.732 13:48:27 -- host/auth.sh@68 -- # digest=sha256 00:19:24.732 13:48:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:24.732 13:48:27 -- host/auth.sh@68 -- # keyid=1 00:19:24.732 13:48:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.732 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.732 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:24.732 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.732 13:48:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.732 13:48:27 -- nvmf/common.sh@717 -- # local ip 00:19:24.732 13:48:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.732 13:48:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.732 13:48:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.732 13:48:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.732 13:48:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:24.732 13:48:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:24.732 13:48:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 
00:19:24.732 13:48:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:24.732 13:48:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:24.732 13:48:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:24.732 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.732 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:25.299 nvme0n1 00:19:25.299 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.299 13:48:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.299 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.299 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:25.299 13:48:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.299 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.299 13:48:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.299 13:48:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.299 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.299 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:25.299 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.299 13:48:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.299 13:48:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:25.299 13:48:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.299 13:48:27 -- host/auth.sh@44 -- # digest=sha256 00:19:25.299 13:48:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.299 13:48:27 -- host/auth.sh@44 -- # keyid=2 00:19:25.299 13:48:27 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:25.299 13:48:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:25.299 13:48:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.299 13:48:27 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:25.299 13:48:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:19:25.299 13:48:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.299 13:48:27 -- host/auth.sh@68 -- # digest=sha256 00:19:25.299 13:48:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.299 13:48:27 -- host/auth.sh@68 -- # keyid=2 00:19:25.299 13:48:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.299 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.299 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:25.299 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.299 13:48:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.299 13:48:27 -- nvmf/common.sh@717 -- # local ip 00:19:25.299 13:48:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.299 13:48:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.299 13:48:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.299 13:48:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.299 13:48:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:25.299 13:48:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:25.299 13:48:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:25.299 13:48:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:25.299 13:48:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 
00:19:25.299 13:48:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.299 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.299 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:25.557 nvme0n1 00:19:25.557 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.557 13:48:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.557 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.557 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.557 13:48:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.557 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.557 13:48:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.816 13:48:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.816 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.816 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.816 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.816 13:48:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.816 13:48:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:25.816 13:48:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.816 13:48:28 -- host/auth.sh@44 -- # digest=sha256 00:19:25.816 13:48:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.816 13:48:28 -- host/auth.sh@44 -- # keyid=3 00:19:25.816 13:48:28 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:25.816 13:48:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:25.816 13:48:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.816 13:48:28 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:25.816 13:48:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:19:25.816 13:48:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.816 13:48:28 -- host/auth.sh@68 -- # digest=sha256 00:19:25.816 13:48:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.816 13:48:28 -- host/auth.sh@68 -- # keyid=3 00:19:25.816 13:48:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.816 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.816 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.816 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.816 13:48:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.816 13:48:28 -- nvmf/common.sh@717 -- # local ip 00:19:25.816 13:48:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.816 13:48:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.816 13:48:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.816 13:48:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.816 13:48:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:25.816 13:48:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:25.816 13:48:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:25.816 13:48:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:25.816 13:48:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:25.816 13:48:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:25.816 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.816 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.074 nvme0n1 00:19:26.074 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.074 13:48:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.074 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.074 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.074 13:48:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.074 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.074 13:48:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.074 13:48:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.074 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.074 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.333 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.333 13:48:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.333 13:48:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:26.333 13:48:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.333 13:48:28 -- host/auth.sh@44 -- # digest=sha256 00:19:26.333 13:48:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.333 13:48:28 -- host/auth.sh@44 -- # keyid=4 00:19:26.333 13:48:28 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:26.333 13:48:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:26.333 13:48:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:26.333 13:48:28 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:26.333 13:48:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:19:26.333 13:48:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.333 13:48:28 -- host/auth.sh@68 -- # digest=sha256 00:19:26.333 13:48:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:26.333 13:48:28 -- host/auth.sh@68 -- # keyid=4 00:19:26.333 13:48:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.333 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.333 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.333 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.333 13:48:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.333 13:48:28 -- nvmf/common.sh@717 -- # local ip 00:19:26.333 13:48:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.333 13:48:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.333 13:48:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.333 13:48:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.333 13:48:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:26.333 13:48:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:26.333 13:48:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:26.333 13:48:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:26.333 13:48:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:26.333 13:48:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:26.333 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.333 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.592 nvme0n1 00:19:26.592 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.592 13:48:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.592 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.592 13:48:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.592 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.592 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.592 13:48:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.592 13:48:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.592 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.592 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.592 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.592 13:48:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.592 13:48:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.592 13:48:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:26.592 13:48:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.592 13:48:29 -- host/auth.sh@44 -- # digest=sha256 00:19:26.592 13:48:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:26.592 13:48:29 -- host/auth.sh@44 -- # keyid=0 00:19:26.592 13:48:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:26.592 13:48:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:26.592 13:48:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:26.592 13:48:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:26.592 13:48:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:19:26.592 13:48:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.592 13:48:29 -- host/auth.sh@68 -- # digest=sha256 00:19:26.592 13:48:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:26.592 13:48:29 -- host/auth.sh@68 -- # keyid=0 00:19:26.592 13:48:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:26.592 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.592 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.592 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.592 13:48:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.592 13:48:29 -- nvmf/common.sh@717 -- # local ip 00:19:26.592 13:48:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.592 13:48:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.592 13:48:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.592 13:48:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.592 13:48:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:26.592 13:48:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:26.592 13:48:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:26.592 13:48:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:26.592 13:48:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:26.592 13:48:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 00:19:26.592 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.592 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:27.526 nvme0n1 00:19:27.526 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.526 13:48:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.526 13:48:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:27.526 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.526 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:27.526 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.526 13:48:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.526 13:48:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.526 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.526 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:27.526 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.526 13:48:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:27.526 13:48:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:27.526 13:48:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:27.526 13:48:30 -- host/auth.sh@44 -- # digest=sha256 00:19:27.526 13:48:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.526 13:48:30 -- host/auth.sh@44 -- # keyid=1 00:19:27.526 13:48:30 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:27.526 13:48:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:27.526 13:48:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:27.526 13:48:30 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:27.526 13:48:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:19:27.526 13:48:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:27.526 13:48:30 -- host/auth.sh@68 -- # digest=sha256 00:19:27.526 13:48:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:27.526 13:48:30 -- host/auth.sh@68 -- # keyid=1 00:19:27.526 13:48:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:27.526 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.526 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:27.526 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.526 13:48:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:27.526 13:48:30 -- nvmf/common.sh@717 -- # local ip 00:19:27.526 13:48:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.526 13:48:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.526 13:48:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.526 13:48:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.526 13:48:30 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:27.526 13:48:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:27.526 13:48:30 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:27.526 13:48:30 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:27.526 13:48:30 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:27.526 13:48:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:27.526 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:19:27.526 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.091 nvme0n1 00:19:28.091 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.091 13:48:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.091 13:48:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:28.091 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.091 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.091 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.091 13:48:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.091 13:48:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.091 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.091 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.091 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.091 13:48:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:28.091 13:48:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:28.091 13:48:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:28.091 13:48:30 -- host/auth.sh@44 -- # digest=sha256 00:19:28.091 13:48:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.091 13:48:30 -- host/auth.sh@44 -- # keyid=2 00:19:28.091 13:48:30 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:28.091 13:48:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:28.091 13:48:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:28.091 13:48:30 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:28.091 13:48:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:19:28.091 13:48:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:28.091 13:48:30 -- host/auth.sh@68 -- # digest=sha256 00:19:28.091 13:48:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:28.091 13:48:30 -- host/auth.sh@68 -- # keyid=2 00:19:28.091 13:48:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.091 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.091 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.091 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.091 13:48:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:28.091 13:48:30 -- nvmf/common.sh@717 -- # local ip 00:19:28.091 13:48:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.091 13:48:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.091 13:48:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.091 13:48:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.091 13:48:30 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:28.091 13:48:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:28.091 13:48:30 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:28.091 13:48:30 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:28.091 13:48:30 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:28.091 13:48:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:28.091 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.091 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:29.024 nvme0n1 00:19:29.024 13:48:31 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:19:29.024 13:48:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.024 13:48:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:29.024 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.024 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:29.024 13:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.024 13:48:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.024 13:48:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.024 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.024 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:29.024 13:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.024 13:48:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:29.025 13:48:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:29.025 13:48:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:29.025 13:48:31 -- host/auth.sh@44 -- # digest=sha256 00:19:29.025 13:48:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:29.025 13:48:31 -- host/auth.sh@44 -- # keyid=3 00:19:29.025 13:48:31 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:29.025 13:48:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:29.025 13:48:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:29.025 13:48:31 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:29.025 13:48:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:19:29.025 13:48:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:29.025 13:48:31 -- host/auth.sh@68 -- # digest=sha256 00:19:29.025 13:48:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:29.025 13:48:31 -- host/auth.sh@68 -- # keyid=3 00:19:29.025 13:48:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.025 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.025 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:29.025 13:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.025 13:48:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:29.025 13:48:31 -- nvmf/common.sh@717 -- # local ip 00:19:29.025 13:48:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:29.025 13:48:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:29.025 13:48:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.025 13:48:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.025 13:48:31 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:29.025 13:48:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:29.025 13:48:31 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:29.025 13:48:31 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:29.025 13:48:31 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:29.025 13:48:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:29.025 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.025 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:29.589 nvme0n1 00:19:29.589 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.589 13:48:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 
00:19:29.589 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.589 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:29.589 13:48:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:29.589 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.589 13:48:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.589 13:48:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.589 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.589 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:29.847 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.847 13:48:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:29.847 13:48:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:29.847 13:48:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:29.847 13:48:32 -- host/auth.sh@44 -- # digest=sha256 00:19:29.847 13:48:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:29.847 13:48:32 -- host/auth.sh@44 -- # keyid=4 00:19:29.847 13:48:32 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:29.847 13:48:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:29.847 13:48:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:29.847 13:48:32 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:29.847 13:48:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:19:29.847 13:48:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:29.847 13:48:32 -- host/auth.sh@68 -- # digest=sha256 00:19:29.847 13:48:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:29.847 13:48:32 -- host/auth.sh@68 -- # keyid=4 00:19:29.847 13:48:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.847 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.847 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:29.847 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.847 13:48:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:29.847 13:48:32 -- nvmf/common.sh@717 -- # local ip 00:19:29.847 13:48:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:29.847 13:48:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:29.847 13:48:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.847 13:48:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.847 13:48:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:29.847 13:48:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:29.847 13:48:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:29.847 13:48:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:29.847 13:48:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:29.847 13:48:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.847 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.847 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:30.412 nvme0n1 00:19:30.412 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.412 13:48:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.412 13:48:33 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:19:30.412 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.412 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.412 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.412 13:48:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.412 13:48:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.412 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.412 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.412 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.412 13:48:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.412 13:48:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:30.412 13:48:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:30.412 13:48:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:30.412 13:48:33 -- host/auth.sh@44 -- # digest=sha256 00:19:30.412 13:48:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.412 13:48:33 -- host/auth.sh@44 -- # keyid=0 00:19:30.412 13:48:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:30.412 13:48:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:30.412 13:48:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:30.412 13:48:33 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:30.412 13:48:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:19:30.412 13:48:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:30.412 13:48:33 -- host/auth.sh@68 -- # digest=sha256 00:19:30.412 13:48:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:30.412 13:48:33 -- host/auth.sh@68 -- # keyid=0 00:19:30.412 13:48:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.412 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.412 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.412 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.412 13:48:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:30.412 13:48:33 -- nvmf/common.sh@717 -- # local ip 00:19:30.412 13:48:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:30.412 13:48:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:30.412 13:48:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.412 13:48:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.412 13:48:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:30.412 13:48:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:30.412 13:48:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:30.412 13:48:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:30.412 13:48:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:30.412 13:48:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:30.412 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.412 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:31.786 nvme0n1 00:19:31.786 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.786 13:48:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.786 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.786 13:48:34 -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.786 13:48:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:31.786 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.786 13:48:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.786 13:48:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.786 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.786 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:19:31.786 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.786 13:48:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:31.786 13:48:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:31.786 13:48:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:31.786 13:48:34 -- host/auth.sh@44 -- # digest=sha256 00:19:31.786 13:48:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.786 13:48:34 -- host/auth.sh@44 -- # keyid=1 00:19:31.786 13:48:34 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:31.786 13:48:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:31.786 13:48:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:31.786 13:48:34 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:31.786 13:48:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:19:31.786 13:48:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:31.786 13:48:34 -- host/auth.sh@68 -- # digest=sha256 00:19:31.786 13:48:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:31.786 13:48:34 -- host/auth.sh@68 -- # keyid=1 00:19:31.786 13:48:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.786 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.786 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:19:31.786 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.786 13:48:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:31.786 13:48:34 -- nvmf/common.sh@717 -- # local ip 00:19:31.786 13:48:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:31.786 13:48:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:31.786 13:48:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.786 13:48:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.786 13:48:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:31.786 13:48:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:31.786 13:48:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:31.786 13:48:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:31.786 13:48:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:31.786 13:48:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:31.786 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.786 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:19:33.160 nvme0n1 00:19:33.160 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.160 13:48:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.160 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.160 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.160 13:48:35 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:19:33.160 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.160 13:48:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.160 13:48:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.160 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.160 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.160 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.160 13:48:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:33.160 13:48:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:33.160 13:48:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:33.160 13:48:35 -- host/auth.sh@44 -- # digest=sha256 00:19:33.160 13:48:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:33.160 13:48:35 -- host/auth.sh@44 -- # keyid=2 00:19:33.161 13:48:35 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:33.161 13:48:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:33.161 13:48:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:33.161 13:48:35 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:33.161 13:48:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:19:33.161 13:48:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:33.161 13:48:35 -- host/auth.sh@68 -- # digest=sha256 00:19:33.161 13:48:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:33.161 13:48:35 -- host/auth.sh@68 -- # keyid=2 00:19:33.161 13:48:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.161 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.161 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.161 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.161 13:48:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:33.161 13:48:35 -- nvmf/common.sh@717 -- # local ip 00:19:33.161 13:48:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:33.161 13:48:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:33.161 13:48:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.161 13:48:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.161 13:48:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:33.161 13:48:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.161 13:48:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.161 13:48:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:33.161 13:48:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:33.161 13:48:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:33.161 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.161 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:19:34.094 nvme0n1 00:19:34.094 13:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.094 13:48:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.094 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.094 13:48:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:34.094 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:34.094 13:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.094 13:48:36 -- host/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:34.094 13:48:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.094 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.094 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:34.352 13:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.352 13:48:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:34.352 13:48:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:34.352 13:48:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:34.352 13:48:36 -- host/auth.sh@44 -- # digest=sha256 00:19:34.352 13:48:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:34.352 13:48:36 -- host/auth.sh@44 -- # keyid=3 00:19:34.352 13:48:36 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:34.352 13:48:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:34.352 13:48:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:34.352 13:48:36 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:34.352 13:48:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:19:34.352 13:48:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:34.352 13:48:36 -- host/auth.sh@68 -- # digest=sha256 00:19:34.352 13:48:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:34.352 13:48:36 -- host/auth.sh@68 -- # keyid=3 00:19:34.352 13:48:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.352 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.352 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:34.352 13:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.352 13:48:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:34.352 13:48:36 -- nvmf/common.sh@717 -- # local ip 00:19:34.352 13:48:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:34.352 13:48:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:34.352 13:48:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.352 13:48:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.352 13:48:36 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:34.352 13:48:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:34.352 13:48:36 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:34.352 13:48:36 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:34.352 13:48:36 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:34.352 13:48:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:34.352 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.352 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:35.724 nvme0n1 00:19:35.724 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.724 13:48:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.724 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.724 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.724 13:48:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.724 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.724 13:48:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.724 13:48:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:19:35.724 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.724 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.724 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.724 13:48:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.724 13:48:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:35.724 13:48:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.724 13:48:38 -- host/auth.sh@44 -- # digest=sha256 00:19:35.724 13:48:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:35.724 13:48:38 -- host/auth.sh@44 -- # keyid=4 00:19:35.724 13:48:38 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:35.724 13:48:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:35.724 13:48:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:35.724 13:48:38 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:35.724 13:48:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:19:35.724 13:48:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.724 13:48:38 -- host/auth.sh@68 -- # digest=sha256 00:19:35.724 13:48:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:35.724 13:48:38 -- host/auth.sh@68 -- # keyid=4 00:19:35.724 13:48:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.724 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.724 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.724 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.724 13:48:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.724 13:48:38 -- nvmf/common.sh@717 -- # local ip 00:19:35.724 13:48:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.724 13:48:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.724 13:48:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.724 13:48:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.724 13:48:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:35.724 13:48:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:35.724 13:48:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.724 13:48:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:35.724 13:48:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:35.724 13:48:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.724 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.724 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:36.658 nvme0n1 00:19:36.658 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.658 13:48:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.658 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.658 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:36.658 13:48:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.658 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.658 13:48:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.658 13:48:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.658 13:48:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.658 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:36.916 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.916 13:48:39 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:36.916 13:48:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.916 13:48:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:36.916 13:48:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:36.916 13:48:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:36.916 13:48:39 -- host/auth.sh@44 -- # digest=sha384 00:19:36.916 13:48:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.916 13:48:39 -- host/auth.sh@44 -- # keyid=0 00:19:36.916 13:48:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:36.916 13:48:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:36.916 13:48:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:36.916 13:48:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:36.917 13:48:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:19:36.917 13:48:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:36.917 13:48:39 -- host/auth.sh@68 -- # digest=sha384 00:19:36.917 13:48:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:36.917 13:48:39 -- host/auth.sh@68 -- # keyid=0 00:19:36.917 13:48:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.917 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.917 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.917 13:48:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:36.917 13:48:39 -- nvmf/common.sh@717 -- # local ip 00:19:36.917 13:48:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:36.917 13:48:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:36.917 13:48:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.917 13:48:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.917 13:48:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:36.917 13:48:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.917 13:48:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.917 13:48:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:36.917 13:48:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:36.917 13:48:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:36.917 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.917 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 nvme0n1 00:19:36.917 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.917 13:48:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.917 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.917 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:36.917 13:48:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.917 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.175 13:48:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.175 13:48:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.175 
13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.175 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.175 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.175 13:48:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.175 13:48:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:37.175 13:48:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.175 13:48:39 -- host/auth.sh@44 -- # digest=sha384 00:19:37.175 13:48:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.175 13:48:39 -- host/auth.sh@44 -- # keyid=1 00:19:37.175 13:48:39 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:37.175 13:48:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:37.175 13:48:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:37.175 13:48:39 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:37.175 13:48:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:19:37.175 13:48:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.175 13:48:39 -- host/auth.sh@68 -- # digest=sha384 00:19:37.175 13:48:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:37.175 13:48:39 -- host/auth.sh@68 -- # keyid=1 00:19:37.175 13:48:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.175 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.175 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.175 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.175 13:48:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.175 13:48:39 -- nvmf/common.sh@717 -- # local ip 00:19:37.175 13:48:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.175 13:48:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.176 13:48:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.176 13:48:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.176 13:48:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:37.176 13:48:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:37.176 13:48:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.176 13:48:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:37.176 13:48:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:37.176 13:48:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:37.176 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.176 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.176 nvme0n1 00:19:37.176 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.176 13:48:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.176 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.176 13:48:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.176 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.436 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.436 13:48:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.436 13:48:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.436 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.436 13:48:40 -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.436 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.436 13:48:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.436 13:48:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:37.436 13:48:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.436 13:48:40 -- host/auth.sh@44 -- # digest=sha384 00:19:37.436 13:48:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.436 13:48:40 -- host/auth.sh@44 -- # keyid=2 00:19:37.436 13:48:40 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:37.436 13:48:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:37.436 13:48:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:37.436 13:48:40 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:37.436 13:48:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:19:37.436 13:48:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.436 13:48:40 -- host/auth.sh@68 -- # digest=sha384 00:19:37.436 13:48:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:37.436 13:48:40 -- host/auth.sh@68 -- # keyid=2 00:19:37.436 13:48:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.436 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.436 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.436 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.436 13:48:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.436 13:48:40 -- nvmf/common.sh@717 -- # local ip 00:19:37.436 13:48:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.436 13:48:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.436 13:48:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.436 13:48:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.436 13:48:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:37.436 13:48:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:37.436 13:48:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.436 13:48:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:37.436 13:48:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:37.436 13:48:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:37.436 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.436 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.698 nvme0n1 00:19:37.698 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.698 13:48:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.698 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.698 13:48:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.698 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.698 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.698 13:48:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.698 13:48:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.698 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.698 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.698 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.698 13:48:40 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.698 13:48:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:37.698 13:48:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.698 13:48:40 -- host/auth.sh@44 -- # digest=sha384 00:19:37.698 13:48:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.698 13:48:40 -- host/auth.sh@44 -- # keyid=3 00:19:37.699 13:48:40 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:37.699 13:48:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:37.699 13:48:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:37.699 13:48:40 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:37.699 13:48:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:19:37.699 13:48:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.699 13:48:40 -- host/auth.sh@68 -- # digest=sha384 00:19:37.699 13:48:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:37.699 13:48:40 -- host/auth.sh@68 -- # keyid=3 00:19:37.699 13:48:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.699 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.699 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.699 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.699 13:48:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.699 13:48:40 -- nvmf/common.sh@717 -- # local ip 00:19:37.699 13:48:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.699 13:48:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.699 13:48:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.699 13:48:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.699 13:48:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:37.699 13:48:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:37.699 13:48:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.699 13:48:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:37.699 13:48:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:37.699 13:48:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:37.699 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.699 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.956 nvme0n1 00:19:37.956 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.956 13:48:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.956 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.956 13:48:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.956 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.956 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.956 13:48:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.956 13:48:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.956 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.956 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.956 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.956 13:48:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.956 13:48:40 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha384 ffdhe2048 4 00:19:37.956 13:48:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.956 13:48:40 -- host/auth.sh@44 -- # digest=sha384 00:19:37.956 13:48:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.956 13:48:40 -- host/auth.sh@44 -- # keyid=4 00:19:37.956 13:48:40 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:37.956 13:48:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:37.956 13:48:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:37.956 13:48:40 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:37.956 13:48:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:19:37.956 13:48:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.956 13:48:40 -- host/auth.sh@68 -- # digest=sha384 00:19:37.956 13:48:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:37.956 13:48:40 -- host/auth.sh@68 -- # keyid=4 00:19:37.956 13:48:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.956 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.956 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.956 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.956 13:48:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.956 13:48:40 -- nvmf/common.sh@717 -- # local ip 00:19:37.956 13:48:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.956 13:48:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.956 13:48:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.956 13:48:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.956 13:48:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:37.956 13:48:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:37.956 13:48:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.957 13:48:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:37.957 13:48:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:37.957 13:48:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.957 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.957 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.214 nvme0n1 00:19:38.214 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.214 13:48:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.214 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.214 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.214 13:48:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.214 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.214 13:48:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.214 13:48:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.214 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.214 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.214 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.214 13:48:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.214 13:48:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.214 13:48:40 -- 
host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:38.214 13:48:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.214 13:48:40 -- host/auth.sh@44 -- # digest=sha384 00:19:38.214 13:48:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.214 13:48:40 -- host/auth.sh@44 -- # keyid=0 00:19:38.214 13:48:40 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:38.214 13:48:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:38.214 13:48:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:38.214 13:48:40 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:38.214 13:48:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:19:38.214 13:48:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.214 13:48:40 -- host/auth.sh@68 -- # digest=sha384 00:19:38.214 13:48:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:38.214 13:48:40 -- host/auth.sh@68 -- # keyid=0 00:19:38.214 13:48:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:38.214 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.214 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.214 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.214 13:48:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.214 13:48:40 -- nvmf/common.sh@717 -- # local ip 00:19:38.214 13:48:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.214 13:48:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.214 13:48:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.214 13:48:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.214 13:48:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:38.214 13:48:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.214 13:48:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.214 13:48:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:38.214 13:48:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:38.214 13:48:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:38.214 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.214 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 nvme0n1 00:19:38.473 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.473 13:48:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.473 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.473 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 13:48:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.473 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.473 13:48:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.473 13:48:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.473 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.473 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.473 13:48:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.473 13:48:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:38.473 13:48:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 
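Each nvmet_auth_set_key call in the trace provisions the target-side DH-HMAC-CHAP secret before the host tries to connect: the helper formats the digest as 'hmac(sha384)', selects the FFDHE group, and echoes the DHHC-1 secret for the requested keyid. The destination of those echoes is not visible in the xtrace output, so the configfs paths and attribute names below are assumptions; this is only a minimal sketch of the pattern the trace implies, not the actual helper from host/auth.sh.

    # Sketch only: what each traced nvmet_auth_set_key call appears to do.
    # The configfs host directory and the dhchap_* attribute names are assumed,
    # not shown in the log; ${keys[@]} is the array the surrounding loop walks.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}                          # DHHC-1:xx:...: secret string
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
        echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # ffdhe2048 ... ffdhe8192
        echo "${key}"          > "${host}/dhchap_key"
    }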
00:19:38.473 13:48:41 -- host/auth.sh@44 -- # digest=sha384 00:19:38.473 13:48:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.473 13:48:41 -- host/auth.sh@44 -- # keyid=1 00:19:38.473 13:48:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:38.473 13:48:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:38.473 13:48:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:38.473 13:48:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:38.473 13:48:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:19:38.473 13:48:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.473 13:48:41 -- host/auth.sh@68 -- # digest=sha384 00:19:38.473 13:48:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:38.473 13:48:41 -- host/auth.sh@68 -- # keyid=1 00:19:38.473 13:48:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:38.473 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.473 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.473 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.473 13:48:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.473 13:48:41 -- nvmf/common.sh@717 -- # local ip 00:19:38.473 13:48:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.473 13:48:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.473 13:48:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.473 13:48:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.473 13:48:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:38.473 13:48:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.473 13:48:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.473 13:48:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:38.473 13:48:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:38.473 13:48:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:38.473 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.473 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.731 nvme0n1 00:19:38.731 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.731 13:48:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.731 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.731 13:48:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.731 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.731 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.988 13:48:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.988 13:48:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.988 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.988 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.988 13:48:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.988 13:48:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:38.988 13:48:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.988 13:48:41 -- host/auth.sh@44 -- # digest=sha384 00:19:38.988 13:48:41 -- 
host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.988 13:48:41 -- host/auth.sh@44 -- # keyid=2 00:19:38.988 13:48:41 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:38.988 13:48:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:38.988 13:48:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:38.988 13:48:41 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:38.988 13:48:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:19:38.988 13:48:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.988 13:48:41 -- host/auth.sh@68 -- # digest=sha384 00:19:38.988 13:48:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:38.988 13:48:41 -- host/auth.sh@68 -- # keyid=2 00:19:38.988 13:48:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:38.988 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.988 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.988 13:48:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.988 13:48:41 -- nvmf/common.sh@717 -- # local ip 00:19:38.988 13:48:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.988 13:48:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.988 13:48:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.988 13:48:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.988 13:48:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:38.988 13:48:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.988 13:48:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.988 13:48:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:38.988 13:48:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:38.988 13:48:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:38.988 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.988 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:39.245 nvme0n1 00:19:39.245 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.245 13:48:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.245 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.245 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:39.245 13:48:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.245 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.245 13:48:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.245 13:48:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.245 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.245 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:39.245 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.245 13:48:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.245 13:48:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:39.245 13:48:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.245 13:48:41 -- host/auth.sh@44 -- # digest=sha384 00:19:39.245 13:48:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:39.245 13:48:41 -- host/auth.sh@44 -- # keyid=3 00:19:39.245 13:48:41 -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:39.245 13:48:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:39.245 13:48:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:39.245 13:48:41 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:39.245 13:48:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:19:39.245 13:48:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.245 13:48:41 -- host/auth.sh@68 -- # digest=sha384 00:19:39.245 13:48:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:39.245 13:48:41 -- host/auth.sh@68 -- # keyid=3 00:19:39.245 13:48:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:39.245 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.245 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:39.245 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.245 13:48:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.245 13:48:41 -- nvmf/common.sh@717 -- # local ip 00:19:39.245 13:48:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.245 13:48:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.245 13:48:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.245 13:48:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.245 13:48:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:39.245 13:48:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:39.245 13:48:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:39.245 13:48:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:39.245 13:48:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:39.245 13:48:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:39.245 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.245 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:19:39.503 nvme0n1 00:19:39.503 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.503 13:48:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.503 13:48:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.503 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.503 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.503 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.503 13:48:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.503 13:48:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.503 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.503 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.503 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.503 13:48:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.503 13:48:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:39.503 13:48:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.503 13:48:42 -- host/auth.sh@44 -- # digest=sha384 00:19:39.503 13:48:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:39.503 13:48:42 -- host/auth.sh@44 -- # keyid=4 00:19:39.503 13:48:42 -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:39.503 13:48:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:39.503 13:48:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:39.503 13:48:42 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:39.503 13:48:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:19:39.503 13:48:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.503 13:48:42 -- host/auth.sh@68 -- # digest=sha384 00:19:39.503 13:48:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:39.503 13:48:42 -- host/auth.sh@68 -- # keyid=4 00:19:39.503 13:48:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:39.503 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.503 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.503 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.503 13:48:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.503 13:48:42 -- nvmf/common.sh@717 -- # local ip 00:19:39.503 13:48:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.503 13:48:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.503 13:48:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.503 13:48:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.504 13:48:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:39.504 13:48:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:39.504 13:48:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:39.504 13:48:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:39.504 13:48:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:39.504 13:48:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.504 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.504 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.761 nvme0n1 00:19:39.761 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.761 13:48:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.761 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.761 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.761 13:48:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.761 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.761 13:48:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.761 13:48:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.761 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.761 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.018 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.018 13:48:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.018 13:48:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.018 13:48:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:40.018 13:48:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.018 13:48:42 -- host/auth.sh@44 -- # digest=sha384 00:19:40.018 13:48:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.018 13:48:42 -- host/auth.sh@44 -- # keyid=0 00:19:40.018 13:48:42 -- 
host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:40.018 13:48:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:40.018 13:48:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:40.018 13:48:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:40.018 13:48:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:19:40.018 13:48:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.018 13:48:42 -- host/auth.sh@68 -- # digest=sha384 00:19:40.018 13:48:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:40.018 13:48:42 -- host/auth.sh@68 -- # keyid=0 00:19:40.018 13:48:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:40.018 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.018 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.018 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.018 13:48:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.018 13:48:42 -- nvmf/common.sh@717 -- # local ip 00:19:40.018 13:48:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.018 13:48:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.018 13:48:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.018 13:48:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.018 13:48:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:40.018 13:48:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:40.018 13:48:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:40.018 13:48:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:40.018 13:48:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:40.018 13:48:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:40.018 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.018 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.275 nvme0n1 00:19:40.275 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.275 13:48:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.275 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.275 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.275 13:48:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.275 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.275 13:48:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.275 13:48:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.275 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.275 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.532 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.532 13:48:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.532 13:48:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:40.532 13:48:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.532 13:48:43 -- host/auth.sh@44 -- # digest=sha384 00:19:40.532 13:48:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.532 13:48:43 -- host/auth.sh@44 -- # keyid=1 00:19:40.532 13:48:43 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:40.532 13:48:43 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:40.532 13:48:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:40.532 13:48:43 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:40.532 13:48:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:40.532 13:48:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.532 13:48:43 -- host/auth.sh@68 -- # digest=sha384 00:19:40.532 13:48:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:40.532 13:48:43 -- host/auth.sh@68 -- # keyid=1 00:19:40.532 13:48:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:40.532 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.532 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.532 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.532 13:48:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.532 13:48:43 -- nvmf/common.sh@717 -- # local ip 00:19:40.532 13:48:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.532 13:48:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.532 13:48:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.532 13:48:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.532 13:48:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:40.532 13:48:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:40.532 13:48:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:40.532 13:48:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:40.532 13:48:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:40.532 13:48:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:40.532 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.532 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.789 nvme0n1 00:19:40.789 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.789 13:48:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.789 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.789 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.789 13:48:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.789 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.789 13:48:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.789 13:48:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.789 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.789 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.789 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.789 13:48:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.789 13:48:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:40.789 13:48:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.789 13:48:43 -- host/auth.sh@44 -- # digest=sha384 00:19:40.789 13:48:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.789 13:48:43 -- host/auth.sh@44 -- # keyid=2 00:19:40.789 13:48:43 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:40.789 13:48:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:40.789 13:48:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:40.789 
13:48:43 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:40.789 13:48:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:40.789 13:48:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.789 13:48:43 -- host/auth.sh@68 -- # digest=sha384 00:19:40.789 13:48:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:40.789 13:48:43 -- host/auth.sh@68 -- # keyid=2 00:19:40.789 13:48:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:40.789 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.789 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.789 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.789 13:48:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.789 13:48:43 -- nvmf/common.sh@717 -- # local ip 00:19:40.789 13:48:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.789 13:48:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.789 13:48:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.789 13:48:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.789 13:48:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:40.789 13:48:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:40.789 13:48:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:40.789 13:48:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:40.789 13:48:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:40.789 13:48:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:40.789 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.789 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:41.355 nvme0n1 00:19:41.355 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.355 13:48:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.355 13:48:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:41.355 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.355 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:41.355 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.355 13:48:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.355 13:48:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.355 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.355 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.355 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.355 13:48:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:41.355 13:48:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:41.355 13:48:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:41.355 13:48:44 -- host/auth.sh@44 -- # digest=sha384 00:19:41.355 13:48:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.355 13:48:44 -- host/auth.sh@44 -- # keyid=3 00:19:41.355 13:48:44 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:41.355 13:48:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:41.355 13:48:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:41.355 13:48:44 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 
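The connect_authenticate half of each iteration is fully visible in the trace that follows: the host is restricted to the same digest and DH group via bdev_nvme_set_options, a controller is attached over RDMA with the matching --dhchap-key, and success is judged by whether bdev_nvme_get_controllers reports nvme0 before the controller is detached again. A condensed sketch of one such pass (digest sha384, dhgroup ffdhe4096, keyid 3), using the rpc_cmd wrapper the test already relies on (assumed to forward to scripts/rpc.py), looks like this; every RPC name and argument is taken from the trace:

    # Sketch of one host-side verification pass, condensed from the traced RPCs.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3
    # DH-CHAP succeeded only if the controller actually shows up:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Because every keyid is exercised against every (digest, dhgroup) pair, the same set/attach/verify/detach sequence repeats through ffdhe2048 up to ffdhe8192 in the log below.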
00:19:41.355 13:48:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:41.355 13:48:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:41.355 13:48:44 -- host/auth.sh@68 -- # digest=sha384 00:19:41.355 13:48:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:41.355 13:48:44 -- host/auth.sh@68 -- # keyid=3 00:19:41.355 13:48:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.355 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.355 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.355 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.355 13:48:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:41.355 13:48:44 -- nvmf/common.sh@717 -- # local ip 00:19:41.355 13:48:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:41.355 13:48:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:41.355 13:48:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.355 13:48:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.355 13:48:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:41.355 13:48:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:41.355 13:48:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:41.355 13:48:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:41.355 13:48:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:41.355 13:48:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:41.355 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.355 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.921 nvme0n1 00:19:41.921 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.921 13:48:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.921 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.921 13:48:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:41.921 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.921 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.921 13:48:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.921 13:48:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.921 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.921 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.921 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.921 13:48:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:41.921 13:48:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:41.921 13:48:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:41.921 13:48:44 -- host/auth.sh@44 -- # digest=sha384 00:19:41.921 13:48:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.921 13:48:44 -- host/auth.sh@44 -- # keyid=4 00:19:41.921 13:48:44 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:41.921 13:48:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:41.921 13:48:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:41.921 13:48:44 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:41.921 13:48:44 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe4096 4 00:19:41.922 13:48:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:41.922 13:48:44 -- host/auth.sh@68 -- # digest=sha384 00:19:41.922 13:48:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:41.922 13:48:44 -- host/auth.sh@68 -- # keyid=4 00:19:41.922 13:48:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.922 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.922 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.922 13:48:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:41.922 13:48:44 -- nvmf/common.sh@717 -- # local ip 00:19:41.922 13:48:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:41.922 13:48:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:41.922 13:48:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.922 13:48:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.922 13:48:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:41.922 13:48:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:41.922 13:48:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:41.922 13:48:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:41.922 13:48:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:41.922 13:48:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:41.922 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.922 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.179 nvme0n1 00:19:42.179 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.179 13:48:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.179 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.179 13:48:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:42.179 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.179 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.437 13:48:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.437 13:48:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.437 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.437 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.437 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.437 13:48:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.437 13:48:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:42.437 13:48:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:42.437 13:48:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:42.437 13:48:45 -- host/auth.sh@44 -- # digest=sha384 00:19:42.437 13:48:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:42.437 13:48:45 -- host/auth.sh@44 -- # keyid=0 00:19:42.437 13:48:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:42.437 13:48:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:42.437 13:48:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:42.437 13:48:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:42.437 13:48:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:42.437 13:48:45 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:19:42.437 13:48:45 -- host/auth.sh@68 -- # digest=sha384 00:19:42.437 13:48:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:42.437 13:48:45 -- host/auth.sh@68 -- # keyid=0 00:19:42.437 13:48:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:42.437 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.437 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:42.437 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.437 13:48:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:42.437 13:48:45 -- nvmf/common.sh@717 -- # local ip 00:19:42.437 13:48:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:42.437 13:48:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:42.437 13:48:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.437 13:48:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.437 13:48:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:42.437 13:48:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:42.437 13:48:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:42.437 13:48:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:42.437 13:48:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:42.437 13:48:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:42.437 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.437 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.002 nvme0n1 00:19:43.002 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.002 13:48:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.002 13:48:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:43.002 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.002 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.002 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.002 13:48:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.002 13:48:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.002 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.002 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.002 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.002 13:48:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:43.002 13:48:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:43.002 13:48:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:43.002 13:48:45 -- host/auth.sh@44 -- # digest=sha384 00:19:43.002 13:48:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.002 13:48:45 -- host/auth.sh@44 -- # keyid=1 00:19:43.002 13:48:45 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:43.002 13:48:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:43.002 13:48:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:43.002 13:48:45 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:43.002 13:48:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:43.002 13:48:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:43.002 13:48:45 -- host/auth.sh@68 -- # 
digest=sha384 00:19:43.002 13:48:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:43.002 13:48:45 -- host/auth.sh@68 -- # keyid=1 00:19:43.002 13:48:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:43.002 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.002 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.002 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.002 13:48:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:43.002 13:48:45 -- nvmf/common.sh@717 -- # local ip 00:19:43.002 13:48:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:43.002 13:48:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:43.002 13:48:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.002 13:48:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.002 13:48:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:43.002 13:48:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:43.002 13:48:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:43.002 13:48:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:43.002 13:48:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:43.002 13:48:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:43.002 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.002 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.567 nvme0n1 00:19:43.567 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.567 13:48:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.567 13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.567 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.567 13:48:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:43.825 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.825 13:48:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.825 13:48:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.825 13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.825 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.825 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.825 13:48:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:43.825 13:48:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:43.825 13:48:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:43.825 13:48:46 -- host/auth.sh@44 -- # digest=sha384 00:19:43.825 13:48:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.825 13:48:46 -- host/auth.sh@44 -- # keyid=2 00:19:43.825 13:48:46 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:43.825 13:48:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:43.825 13:48:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:43.825 13:48:46 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:43.825 13:48:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:43.825 13:48:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:43.825 13:48:46 -- host/auth.sh@68 -- # digest=sha384 00:19:43.825 13:48:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:43.825 13:48:46 -- host/auth.sh@68 -- # keyid=2 00:19:43.825 
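Every attach in the log is preceded by get_main_ns_ip, whose behaviour is spelled out in the nvmf/common.sh lines of the trace: it keeps a small transport-to-variable map and, since this run uses rdma, resolves NVMF_FIRST_TARGET_IP to 192.168.100.8. Below is a sketch of that selection logic reconstructed from the traced lines; the map entries and checks mirror the trace, while the transport variable name and the failure paths are assumptions.

    # Sketch of the address selection seen in the nvmf/common.sh trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP      # this run: 192.168.100.8
            [tcp]=NVMF_INITIATOR_IP
        )
        # $TEST_TRANSPORT is an assumed name for the variable holding "rdma" here.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1          # indirect expansion: value of the named variable
        echo "${!ip}"
    }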
13:48:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:43.825 13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.825 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.825 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.825 13:48:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:43.825 13:48:46 -- nvmf/common.sh@717 -- # local ip 00:19:43.825 13:48:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:43.825 13:48:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:43.825 13:48:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.825 13:48:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.825 13:48:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:43.825 13:48:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:43.825 13:48:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:43.825 13:48:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:43.825 13:48:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:43.825 13:48:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:43.825 13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.825 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:44.390 nvme0n1 00:19:44.390 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.390 13:48:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.390 13:48:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:44.390 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.390 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:44.390 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.390 13:48:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.390 13:48:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.390 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.390 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:44.648 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.648 13:48:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:44.648 13:48:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:44.648 13:48:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:44.648 13:48:47 -- host/auth.sh@44 -- # digest=sha384 00:19:44.648 13:48:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:44.648 13:48:47 -- host/auth.sh@44 -- # keyid=3 00:19:44.648 13:48:47 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:44.648 13:48:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:44.648 13:48:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:44.648 13:48:47 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:44.648 13:48:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:44.648 13:48:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:44.648 13:48:47 -- host/auth.sh@68 -- # digest=sha384 00:19:44.648 13:48:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:44.648 13:48:47 -- host/auth.sh@68 -- # keyid=3 00:19:44.648 13:48:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:19:44.648 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.648 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:44.648 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.648 13:48:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:44.648 13:48:47 -- nvmf/common.sh@717 -- # local ip 00:19:44.648 13:48:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:44.648 13:48:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:44.648 13:48:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.648 13:48:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.648 13:48:47 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:44.648 13:48:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:44.648 13:48:47 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:44.648 13:48:47 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:44.648 13:48:47 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:44.648 13:48:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:44.648 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.648 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.213 nvme0n1 00:19:45.213 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.213 13:48:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.213 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.213 13:48:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:45.213 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.213 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.213 13:48:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.213 13:48:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.213 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.213 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.213 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.213 13:48:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:45.213 13:48:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:45.213 13:48:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:45.213 13:48:48 -- host/auth.sh@44 -- # digest=sha384 00:19:45.213 13:48:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:45.213 13:48:48 -- host/auth.sh@44 -- # keyid=4 00:19:45.213 13:48:48 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:45.213 13:48:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:45.213 13:48:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:45.213 13:48:48 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:45.213 13:48:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:45.213 13:48:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:45.213 13:48:48 -- host/auth.sh@68 -- # digest=sha384 00:19:45.213 13:48:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:45.213 13:48:48 -- host/auth.sh@68 -- # keyid=4 00:19:45.213 13:48:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.213 13:48:48 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.213 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:45.213 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.213 13:48:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:45.213 13:48:48 -- nvmf/common.sh@717 -- # local ip 00:19:45.213 13:48:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:45.213 13:48:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:45.470 13:48:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.470 13:48:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.470 13:48:48 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:45.470 13:48:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:45.470 13:48:48 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:45.470 13:48:48 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:45.470 13:48:48 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:45.470 13:48:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:45.470 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.470 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.035 nvme0n1 00:19:46.035 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.035 13:48:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.035 13:48:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:46.035 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.035 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.035 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.035 13:48:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.035 13:48:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.035 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.035 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.035 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.035 13:48:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.035 13:48:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:46.035 13:48:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:46.035 13:48:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:46.035 13:48:48 -- host/auth.sh@44 -- # digest=sha384 00:19:46.035 13:48:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.035 13:48:48 -- host/auth.sh@44 -- # keyid=0 00:19:46.035 13:48:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:46.035 13:48:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:46.035 13:48:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:46.035 13:48:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:46.035 13:48:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:46.035 13:48:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:46.035 13:48:48 -- host/auth.sh@68 -- # digest=sha384 00:19:46.035 13:48:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:46.035 13:48:48 -- host/auth.sh@68 -- # keyid=0 00:19:46.035 13:48:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.035 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.035 
13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.035 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.035 13:48:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:46.035 13:48:48 -- nvmf/common.sh@717 -- # local ip 00:19:46.035 13:48:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:46.035 13:48:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:46.035 13:48:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.035 13:48:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.035 13:48:48 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:46.035 13:48:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:46.035 13:48:48 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:46.035 13:48:48 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:46.035 13:48:48 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:46.035 13:48:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:46.035 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.035 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.407 nvme0n1 00:19:47.407 13:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.407 13:48:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.407 13:48:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:47.407 13:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.407 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.407 13:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.407 13:48:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.407 13:48:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.407 13:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.407 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.407 13:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.407 13:48:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:47.407 13:48:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:47.407 13:48:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:47.407 13:48:49 -- host/auth.sh@44 -- # digest=sha384 00:19:47.407 13:48:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:47.407 13:48:49 -- host/auth.sh@44 -- # keyid=1 00:19:47.407 13:48:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:47.407 13:48:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:47.407 13:48:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:47.407 13:48:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:47.407 13:48:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:47.407 13:48:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:47.407 13:48:49 -- host/auth.sh@68 -- # digest=sha384 00:19:47.407 13:48:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:47.407 13:48:49 -- host/auth.sh@68 -- # keyid=1 00:19:47.407 13:48:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.407 13:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.407 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.407 13:48:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.407 13:48:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:47.407 13:48:49 -- nvmf/common.sh@717 -- # local ip 00:19:47.407 13:48:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:47.407 13:48:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:47.407 13:48:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.407 13:48:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.407 13:48:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:47.407 13:48:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:47.407 13:48:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:47.407 13:48:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:47.407 13:48:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:47.407 13:48:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:47.407 13:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.407 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:19:48.339 nvme0n1 00:19:48.339 13:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.339 13:48:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.339 13:48:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:48.339 13:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.339 13:48:51 -- common/autotest_common.sh@10 -- # set +x 00:19:48.339 13:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.339 13:48:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.339 13:48:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.339 13:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.339 13:48:51 -- common/autotest_common.sh@10 -- # set +x 00:19:48.597 13:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.597 13:48:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:48.597 13:48:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:48.597 13:48:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:48.597 13:48:51 -- host/auth.sh@44 -- # digest=sha384 00:19:48.597 13:48:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:48.597 13:48:51 -- host/auth.sh@44 -- # keyid=2 00:19:48.597 13:48:51 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:48.597 13:48:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:48.597 13:48:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:48.597 13:48:51 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:48.597 13:48:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:19:48.597 13:48:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:48.597 13:48:51 -- host/auth.sh@68 -- # digest=sha384 00:19:48.597 13:48:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:48.597 13:48:51 -- host/auth.sh@68 -- # keyid=2 00:19:48.597 13:48:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.597 13:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.597 13:48:51 -- common/autotest_common.sh@10 -- # set +x 00:19:48.597 13:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.597 13:48:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:48.597 13:48:51 -- 
nvmf/common.sh@717 -- # local ip 00:19:48.597 13:48:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:48.597 13:48:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:48.597 13:48:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.597 13:48:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.597 13:48:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:48.597 13:48:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:48.597 13:48:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:48.597 13:48:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:48.597 13:48:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:48.597 13:48:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:48.597 13:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.597 13:48:51 -- common/autotest_common.sh@10 -- # set +x 00:19:49.977 nvme0n1 00:19:49.977 13:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.977 13:48:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.977 13:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.977 13:48:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:49.977 13:48:52 -- common/autotest_common.sh@10 -- # set +x 00:19:49.977 13:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.977 13:48:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.977 13:48:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.977 13:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.977 13:48:52 -- common/autotest_common.sh@10 -- # set +x 00:19:49.977 13:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.977 13:48:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:49.977 13:48:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:49.977 13:48:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:49.977 13:48:52 -- host/auth.sh@44 -- # digest=sha384 00:19:49.977 13:48:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:49.977 13:48:52 -- host/auth.sh@44 -- # keyid=3 00:19:49.977 13:48:52 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:49.977 13:48:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:49.977 13:48:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:49.978 13:48:52 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:49.978 13:48:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:49.978 13:48:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:49.978 13:48:52 -- host/auth.sh@68 -- # digest=sha384 00:19:49.978 13:48:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:49.978 13:48:52 -- host/auth.sh@68 -- # keyid=3 00:19:49.978 13:48:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.978 13:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.978 13:48:52 -- common/autotest_common.sh@10 -- # set +x 00:19:49.978 13:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.978 13:48:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:49.978 13:48:52 -- nvmf/common.sh@717 -- # local ip 00:19:49.978 13:48:52 -- nvmf/common.sh@718 -- # 
ip_candidates=() 00:19:49.978 13:48:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:49.978 13:48:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.978 13:48:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.978 13:48:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:49.978 13:48:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:49.978 13:48:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:49.978 13:48:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:49.978 13:48:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:49.978 13:48:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:49.978 13:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.978 13:48:52 -- common/autotest_common.sh@10 -- # set +x 00:19:50.936 nvme0n1 00:19:50.936 13:48:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.936 13:48:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.936 13:48:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:50.936 13:48:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.936 13:48:53 -- common/autotest_common.sh@10 -- # set +x 00:19:50.936 13:48:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.936 13:48:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.936 13:48:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.936 13:48:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.936 13:48:53 -- common/autotest_common.sh@10 -- # set +x 00:19:50.936 13:48:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.936 13:48:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:50.936 13:48:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:50.936 13:48:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:50.936 13:48:53 -- host/auth.sh@44 -- # digest=sha384 00:19:50.936 13:48:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:50.936 13:48:53 -- host/auth.sh@44 -- # keyid=4 00:19:50.936 13:48:53 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:50.936 13:48:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:50.936 13:48:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:50.936 13:48:53 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:50.936 13:48:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:50.936 13:48:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:50.936 13:48:53 -- host/auth.sh@68 -- # digest=sha384 00:19:50.936 13:48:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:50.936 13:48:53 -- host/auth.sh@68 -- # keyid=4 00:19:50.936 13:48:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:50.936 13:48:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.936 13:48:53 -- common/autotest_common.sh@10 -- # set +x 00:19:50.936 13:48:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.936 13:48:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:50.936 13:48:53 -- nvmf/common.sh@717 -- # local ip 00:19:50.936 13:48:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:50.936 13:48:53 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:19:50.936 13:48:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.936 13:48:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.936 13:48:53 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:50.936 13:48:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:50.936 13:48:53 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:50.936 13:48:53 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:50.936 13:48:53 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:50.936 13:48:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:50.936 13:48:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.936 13:48:53 -- common/autotest_common.sh@10 -- # set +x 00:19:52.310 nvme0n1 00:19:52.310 13:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.310 13:48:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.310 13:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.310 13:48:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.310 13:48:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:52.310 13:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.310 13:48:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.310 13:48:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.310 13:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.310 13:48:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.310 13:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.310 13:48:54 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:52.310 13:48:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.310 13:48:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:52.310 13:48:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:52.310 13:48:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:52.310 13:48:54 -- host/auth.sh@44 -- # digest=sha512 00:19:52.310 13:48:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.310 13:48:54 -- host/auth.sh@44 -- # keyid=0 00:19:52.310 13:48:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:52.310 13:48:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:52.310 13:48:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:52.310 13:48:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:52.310 13:48:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:52.310 13:48:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:52.310 13:48:54 -- host/auth.sh@68 -- # digest=sha512 00:19:52.310 13:48:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:52.310 13:48:54 -- host/auth.sh@68 -- # keyid=0 00:19:52.310 13:48:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.310 13:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.310 13:48:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.310 13:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.310 13:48:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:52.310 13:48:54 -- nvmf/common.sh@717 -- # local ip 00:19:52.310 13:48:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:52.310 
13:48:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:52.310 13:48:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.310 13:48:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.310 13:48:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:52.310 13:48:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.310 13:48:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.310 13:48:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:52.310 13:48:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:52.310 13:48:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:52.310 13:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.310 13:48:54 -- common/autotest_common.sh@10 -- # set +x 00:19:52.568 nvme0n1 00:19:52.568 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.568 13:48:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.568 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.568 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:52.568 13:48:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:52.568 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.568 13:48:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.568 13:48:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.568 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.568 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:52.568 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.568 13:48:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:52.568 13:48:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:52.568 13:48:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:52.568 13:48:55 -- host/auth.sh@44 -- # digest=sha512 00:19:52.568 13:48:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.568 13:48:55 -- host/auth.sh@44 -- # keyid=1 00:19:52.568 13:48:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:52.568 13:48:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:52.568 13:48:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:52.568 13:48:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:52.568 13:48:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:19:52.568 13:48:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:52.568 13:48:55 -- host/auth.sh@68 -- # digest=sha512 00:19:52.568 13:48:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:52.568 13:48:55 -- host/auth.sh@68 -- # keyid=1 00:19:52.568 13:48:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.568 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.568 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:52.568 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.568 13:48:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:52.568 13:48:55 -- nvmf/common.sh@717 -- # local ip 00:19:52.568 13:48:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:52.568 13:48:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:52.568 13:48:55 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.568 13:48:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.568 13:48:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:52.568 13:48:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.568 13:48:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.568 13:48:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:52.568 13:48:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:52.568 13:48:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:52.568 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.568 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 nvme0n1 00:19:52.826 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.826 13:48:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.826 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.826 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 13:48:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:52.826 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.826 13:48:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.826 13:48:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.826 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.826 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.826 13:48:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:52.826 13:48:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:52.826 13:48:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:52.826 13:48:55 -- host/auth.sh@44 -- # digest=sha512 00:19:52.826 13:48:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.826 13:48:55 -- host/auth.sh@44 -- # keyid=2 00:19:52.826 13:48:55 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:52.826 13:48:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:52.826 13:48:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:52.826 13:48:55 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:52.826 13:48:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:52.826 13:48:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:52.826 13:48:55 -- host/auth.sh@68 -- # digest=sha512 00:19:52.826 13:48:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:52.826 13:48:55 -- host/auth.sh@68 -- # keyid=2 00:19:52.826 13:48:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.826 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.826 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.826 13:48:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:52.826 13:48:55 -- nvmf/common.sh@717 -- # local ip 00:19:52.826 13:48:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:52.826 13:48:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:52.826 13:48:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.826 13:48:55 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.826 13:48:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:52.826 13:48:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.826 13:48:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.826 13:48:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:52.826 13:48:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:52.826 13:48:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:52.826 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.826 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 nvme0n1 00:19:53.083 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.083 13:48:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.083 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.083 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 13:48:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:53.083 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.083 13:48:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.083 13:48:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.083 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.083 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.083 13:48:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:53.083 13:48:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:53.083 13:48:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:53.083 13:48:55 -- host/auth.sh@44 -- # digest=sha512 00:19:53.083 13:48:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.083 13:48:55 -- host/auth.sh@44 -- # keyid=3 00:19:53.083 13:48:55 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:53.083 13:48:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:53.083 13:48:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:53.083 13:48:55 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:53.083 13:48:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:53.083 13:48:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:53.083 13:48:55 -- host/auth.sh@68 -- # digest=sha512 00:19:53.083 13:48:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:53.083 13:48:55 -- host/auth.sh@68 -- # keyid=3 00:19:53.083 13:48:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.083 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.083 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 13:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.083 13:48:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:53.083 13:48:55 -- nvmf/common.sh@717 -- # local ip 00:19:53.083 13:48:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:53.083 13:48:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:53.083 13:48:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.083 13:48:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.083 13:48:55 -- nvmf/common.sh@723 -- # [[ -z 
rdma ]] 00:19:53.083 13:48:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.083 13:48:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.083 13:48:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:53.083 13:48:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:53.083 13:48:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:53.083 13:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.083 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:19:53.339 nvme0n1 00:19:53.339 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.339 13:48:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.339 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.339 13:48:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:53.339 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.339 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.339 13:48:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.339 13:48:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.339 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.339 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.339 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.339 13:48:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:53.339 13:48:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:53.339 13:48:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:53.339 13:48:56 -- host/auth.sh@44 -- # digest=sha512 00:19:53.339 13:48:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.339 13:48:56 -- host/auth.sh@44 -- # keyid=4 00:19:53.339 13:48:56 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:53.339 13:48:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:53.339 13:48:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:53.339 13:48:56 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:53.339 13:48:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:19:53.340 13:48:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:53.340 13:48:56 -- host/auth.sh@68 -- # digest=sha512 00:19:53.340 13:48:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:53.340 13:48:56 -- host/auth.sh@68 -- # keyid=4 00:19:53.340 13:48:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.340 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.340 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.340 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.340 13:48:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:53.340 13:48:56 -- nvmf/common.sh@717 -- # local ip 00:19:53.340 13:48:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:53.340 13:48:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:53.340 13:48:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.340 13:48:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.340 13:48:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:53.340 13:48:56 -- nvmf/common.sh@723 -- # 
[[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.340 13:48:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.340 13:48:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:53.340 13:48:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:53.340 13:48:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.340 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.340 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.596 nvme0n1 00:19:53.596 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.596 13:48:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.596 13:48:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:53.596 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.596 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.596 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.596 13:48:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.596 13:48:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.596 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.596 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.596 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.596 13:48:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.596 13:48:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:53.596 13:48:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:53.596 13:48:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:53.596 13:48:56 -- host/auth.sh@44 -- # digest=sha512 00:19:53.596 13:48:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.596 13:48:56 -- host/auth.sh@44 -- # keyid=0 00:19:53.596 13:48:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:53.596 13:48:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:53.596 13:48:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:53.596 13:48:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:53.596 13:48:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:19:53.596 13:48:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:53.596 13:48:56 -- host/auth.sh@68 -- # digest=sha512 00:19:53.596 13:48:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:53.596 13:48:56 -- host/auth.sh@68 -- # keyid=0 00:19:53.596 13:48:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:53.596 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.596 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.596 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.596 13:48:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:53.596 13:48:56 -- nvmf/common.sh@717 -- # local ip 00:19:53.596 13:48:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:53.596 13:48:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:53.596 13:48:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.596 13:48:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.596 13:48:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:53.596 13:48:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.596 13:48:56 -- 
nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.597 13:48:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:53.597 13:48:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:53.597 13:48:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:53.597 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.597 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.854 nvme0n1 00:19:53.854 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.854 13:48:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.854 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.854 13:48:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:53.854 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.854 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.111 13:48:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.111 13:48:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.111 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.111 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.111 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.112 13:48:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:54.112 13:48:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:54.112 13:48:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:54.112 13:48:56 -- host/auth.sh@44 -- # digest=sha512 00:19:54.112 13:48:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.112 13:48:56 -- host/auth.sh@44 -- # keyid=1 00:19:54.112 13:48:56 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:54.112 13:48:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:54.112 13:48:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:54.112 13:48:56 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:54.112 13:48:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:19:54.112 13:48:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:54.112 13:48:56 -- host/auth.sh@68 -- # digest=sha512 00:19:54.112 13:48:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:54.112 13:48:56 -- host/auth.sh@68 -- # keyid=1 00:19:54.112 13:48:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.112 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.112 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.112 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.112 13:48:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:54.112 13:48:56 -- nvmf/common.sh@717 -- # local ip 00:19:54.112 13:48:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:54.112 13:48:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:54.112 13:48:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.112 13:48:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.112 13:48:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:54.112 13:48:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.112 13:48:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.112 13:48:56 -- nvmf/common.sh@726 -- # 
[[ -z 192.168.100.8 ]] 00:19:54.112 13:48:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:54.112 13:48:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:54.112 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.112 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.369 nvme0n1 00:19:54.369 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.369 13:48:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.369 13:48:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.369 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.369 13:48:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:54.369 13:48:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.369 13:48:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.369 13:48:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.369 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.369 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.369 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.369 13:48:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:54.369 13:48:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:54.369 13:48:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:54.369 13:48:57 -- host/auth.sh@44 -- # digest=sha512 00:19:54.369 13:48:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.369 13:48:57 -- host/auth.sh@44 -- # keyid=2 00:19:54.369 13:48:57 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:54.369 13:48:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:54.369 13:48:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:54.369 13:48:57 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:54.369 13:48:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:19:54.369 13:48:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:54.369 13:48:57 -- host/auth.sh@68 -- # digest=sha512 00:19:54.369 13:48:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:54.369 13:48:57 -- host/auth.sh@68 -- # keyid=2 00:19:54.369 13:48:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.369 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.369 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.369 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.369 13:48:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:54.369 13:48:57 -- nvmf/common.sh@717 -- # local ip 00:19:54.369 13:48:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:54.369 13:48:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:54.369 13:48:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.369 13:48:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.369 13:48:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:54.369 13:48:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.369 13:48:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.369 13:48:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:54.369 13:48:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:54.369 13:48:57 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:54.369 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.369 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.626 nvme0n1 00:19:54.626 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.626 13:48:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.626 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.626 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.626 13:48:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:54.626 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.626 13:48:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.626 13:48:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.626 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.626 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.626 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.626 13:48:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:54.626 13:48:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:54.626 13:48:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:54.626 13:48:57 -- host/auth.sh@44 -- # digest=sha512 00:19:54.626 13:48:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.626 13:48:57 -- host/auth.sh@44 -- # keyid=3 00:19:54.626 13:48:57 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:54.626 13:48:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:54.626 13:48:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:54.627 13:48:57 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:54.627 13:48:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:19:54.627 13:48:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:54.627 13:48:57 -- host/auth.sh@68 -- # digest=sha512 00:19:54.627 13:48:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:54.627 13:48:57 -- host/auth.sh@68 -- # keyid=3 00:19:54.627 13:48:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.627 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.627 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.627 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.627 13:48:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:54.627 13:48:57 -- nvmf/common.sh@717 -- # local ip 00:19:54.627 13:48:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:54.627 13:48:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:54.627 13:48:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.627 13:48:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.627 13:48:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:54.627 13:48:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.627 13:48:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.627 13:48:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:54.627 13:48:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:54.627 13:48:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:54.627 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.627 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.884 nvme0n1 00:19:54.884 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.884 13:48:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.884 13:48:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:54.884 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.884 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:54.884 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.884 13:48:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.884 13:48:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.884 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.884 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.141 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.141 13:48:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.141 13:48:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:55.142 13:48:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.142 13:48:57 -- host/auth.sh@44 -- # digest=sha512 00:19:55.142 13:48:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:55.142 13:48:57 -- host/auth.sh@44 -- # keyid=4 00:19:55.142 13:48:57 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:55.142 13:48:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:55.142 13:48:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:55.142 13:48:57 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:55.142 13:48:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:19:55.142 13:48:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.142 13:48:57 -- host/auth.sh@68 -- # digest=sha512 00:19:55.142 13:48:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:55.142 13:48:57 -- host/auth.sh@68 -- # keyid=4 00:19:55.142 13:48:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:55.142 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.142 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.142 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.142 13:48:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.142 13:48:57 -- nvmf/common.sh@717 -- # local ip 00:19:55.142 13:48:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.142 13:48:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.142 13:48:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.142 13:48:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.142 13:48:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:55.142 13:48:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.142 13:48:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.142 13:48:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:55.142 13:48:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:55.142 13:48:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:19:55.142 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.142 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.399 nvme0n1 00:19:55.399 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.399 13:48:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:55.399 13:48:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.399 13:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.399 13:48:57 -- common/autotest_common.sh@10 -- # set +x 00:19:55.399 13:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.399 13:48:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.399 13:48:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.399 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.399 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.399 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.399 13:48:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.399 13:48:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.399 13:48:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:55.399 13:48:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.399 13:48:58 -- host/auth.sh@44 -- # digest=sha512 00:19:55.399 13:48:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.399 13:48:58 -- host/auth.sh@44 -- # keyid=0 00:19:55.399 13:48:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:55.399 13:48:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:55.399 13:48:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:55.399 13:48:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:55.399 13:48:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:19:55.399 13:48:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.399 13:48:58 -- host/auth.sh@68 -- # digest=sha512 00:19:55.399 13:48:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:55.399 13:48:58 -- host/auth.sh@68 -- # keyid=0 00:19:55.399 13:48:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:55.399 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.399 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.399 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.399 13:48:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.399 13:48:58 -- nvmf/common.sh@717 -- # local ip 00:19:55.399 13:48:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.399 13:48:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.399 13:48:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.399 13:48:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.399 13:48:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:55.399 13:48:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.399 13:48:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.399 13:48:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:55.399 13:48:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:55.399 13:48:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:55.399 13:48:58 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.399 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.657 nvme0n1 00:19:55.657 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.657 13:48:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.657 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.657 13:48:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:55.657 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.657 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.914 13:48:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.914 13:48:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.914 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.914 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.914 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.914 13:48:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.914 13:48:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:55.914 13:48:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.914 13:48:58 -- host/auth.sh@44 -- # digest=sha512 00:19:55.914 13:48:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.914 13:48:58 -- host/auth.sh@44 -- # keyid=1 00:19:55.914 13:48:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:55.914 13:48:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:55.914 13:48:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:55.914 13:48:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:55.914 13:48:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:19:55.914 13:48:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.914 13:48:58 -- host/auth.sh@68 -- # digest=sha512 00:19:55.914 13:48:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:55.914 13:48:58 -- host/auth.sh@68 -- # keyid=1 00:19:55.914 13:48:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:55.914 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.914 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.915 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.915 13:48:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.915 13:48:58 -- nvmf/common.sh@717 -- # local ip 00:19:55.915 13:48:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.915 13:48:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.915 13:48:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.915 13:48:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.915 13:48:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:55.915 13:48:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.915 13:48:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.915 13:48:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:55.915 13:48:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:55.915 13:48:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:55.915 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.915 13:48:58 -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.171 nvme0n1 00:19:56.171 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.171 13:48:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.171 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.171 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:56.171 13:48:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.171 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.429 13:48:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.429 13:48:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.429 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.429 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:56.429 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.429 13:48:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.429 13:48:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:56.429 13:48:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.429 13:48:59 -- host/auth.sh@44 -- # digest=sha512 00:19:56.429 13:48:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.429 13:48:59 -- host/auth.sh@44 -- # keyid=2 00:19:56.429 13:48:59 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:56.429 13:48:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:56.429 13:48:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:56.429 13:48:59 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:56.429 13:48:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:19:56.429 13:48:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.429 13:48:59 -- host/auth.sh@68 -- # digest=sha512 00:19:56.429 13:48:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:56.429 13:48:59 -- host/auth.sh@68 -- # keyid=2 00:19:56.429 13:48:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:56.429 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.429 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:56.429 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.429 13:48:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.429 13:48:59 -- nvmf/common.sh@717 -- # local ip 00:19:56.429 13:48:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.429 13:48:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.429 13:48:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.429 13:48:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.429 13:48:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:56.429 13:48:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:56.429 13:48:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.429 13:48:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:56.429 13:48:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:56.429 13:48:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:56.429 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.429 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:56.687 nvme0n1 00:19:56.687 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
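
Note: the trace above repeats one verification cycle for each (digest, DH group, key index) combination: nvmet_auth_set_key programs the kernel target with a key, connect_authenticate configures the SPDK host side and attaches, and the controller is checked and detached before the next iteration. A minimal sketch of that cycle, using SPDK's scripts/rpc.py directly in place of the suite's rpc_cmd wrapper (the wrapper, the earlier key registration, and the configfs setup are assumed to have already happened), looks roughly like this:

    digest=sha512 dhgroup=ffdhe4096 keyid=0        # one combination from the loops above
    rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # "key$keyid" names a DH-CHAP key the test registered with the host app earlier (not shown here).
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')   # expect "nvme0"
    [[ $name == nvme0 ]] && rpc.py bdev_nvme_detach_controller nvme0

Only the DH-CHAP-specific arguments change between iterations; the transport, address, and NQNs stay fixed for the whole run.
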
00:19:56.687 13:48:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.687 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.687 13:48:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.687 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:56.687 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.687 13:48:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.687 13:48:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.687 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.687 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:56.687 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.687 13:48:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.687 13:48:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:56.687 13:48:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.687 13:48:59 -- host/auth.sh@44 -- # digest=sha512 00:19:56.687 13:48:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.687 13:48:59 -- host/auth.sh@44 -- # keyid=3 00:19:56.687 13:48:59 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:56.687 13:48:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:56.687 13:48:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:56.687 13:48:59 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:19:56.687 13:48:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:19:56.687 13:48:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.687 13:48:59 -- host/auth.sh@68 -- # digest=sha512 00:19:56.687 13:48:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:56.687 13:48:59 -- host/auth.sh@68 -- # keyid=3 00:19:56.687 13:48:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:56.687 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.687 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:56.687 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.687 13:48:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.687 13:48:59 -- nvmf/common.sh@717 -- # local ip 00:19:56.687 13:48:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.687 13:48:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.687 13:48:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.687 13:48:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.687 13:48:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:56.687 13:48:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:56.687 13:48:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.687 13:48:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:56.687 13:48:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:56.687 13:48:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:56.687 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.687 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.252 nvme0n1 00:19:57.252 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.252 13:48:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.252 
13:48:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.252 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.252 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.252 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.252 13:48:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.252 13:48:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.252 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.252 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.253 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.253 13:48:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.253 13:48:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:57.253 13:48:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.253 13:48:59 -- host/auth.sh@44 -- # digest=sha512 00:19:57.253 13:48:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:57.253 13:48:59 -- host/auth.sh@44 -- # keyid=4 00:19:57.253 13:48:59 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:57.253 13:48:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:57.253 13:48:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:57.253 13:48:59 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:19:57.253 13:48:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:19:57.253 13:48:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.253 13:48:59 -- host/auth.sh@68 -- # digest=sha512 00:19:57.253 13:48:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:57.253 13:48:59 -- host/auth.sh@68 -- # keyid=4 00:19:57.253 13:48:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:57.253 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.253 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.253 13:48:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.253 13:48:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.253 13:48:59 -- nvmf/common.sh@717 -- # local ip 00:19:57.253 13:48:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.253 13:48:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.253 13:48:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.253 13:48:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.253 13:48:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:57.253 13:48:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:57.253 13:48:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:57.253 13:48:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:57.253 13:48:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:57.253 13:48:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.253 13:48:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.253 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.510 nvme0n1 00:19:57.510 13:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.510 13:49:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.510 13:49:00 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:57.510 13:49:00 -- common/autotest_common.sh@10 -- # set +x 00:19:57.510 13:49:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.510 13:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.510 13:49:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.510 13:49:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.510 13:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.510 13:49:00 -- common/autotest_common.sh@10 -- # set +x 00:19:57.767 13:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.767 13:49:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.767 13:49:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.767 13:49:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:57.767 13:49:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.767 13:49:00 -- host/auth.sh@44 -- # digest=sha512 00:19:57.767 13:49:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.767 13:49:00 -- host/auth.sh@44 -- # keyid=0 00:19:57.767 13:49:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:57.767 13:49:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:57.767 13:49:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:57.767 13:49:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:19:57.767 13:49:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:19:57.767 13:49:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.767 13:49:00 -- host/auth.sh@68 -- # digest=sha512 00:19:57.767 13:49:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:57.767 13:49:00 -- host/auth.sh@68 -- # keyid=0 00:19:57.767 13:49:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:57.767 13:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.767 13:49:00 -- common/autotest_common.sh@10 -- # set +x 00:19:57.767 13:49:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.767 13:49:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.767 13:49:00 -- nvmf/common.sh@717 -- # local ip 00:19:57.767 13:49:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.767 13:49:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.767 13:49:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.767 13:49:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.767 13:49:00 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:57.767 13:49:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:57.768 13:49:00 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:57.768 13:49:00 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:57.768 13:49:00 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:57.768 13:49:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:57.768 13:49:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.768 13:49:00 -- common/autotest_common.sh@10 -- # set +x 00:19:58.332 nvme0n1 00:19:58.332 13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.332 13:49:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.332 13:49:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.332 13:49:01 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:58.332 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:58.332 13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.332 13:49:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.332 13:49:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.332 13:49:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.332 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:58.332 13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.332 13:49:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.332 13:49:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:58.332 13:49:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.332 13:49:01 -- host/auth.sh@44 -- # digest=sha512 00:19:58.332 13:49:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.332 13:49:01 -- host/auth.sh@44 -- # keyid=1 00:19:58.332 13:49:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:58.332 13:49:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:58.332 13:49:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:58.332 13:49:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:19:58.332 13:49:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:19:58.332 13:49:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.332 13:49:01 -- host/auth.sh@68 -- # digest=sha512 00:19:58.332 13:49:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:58.332 13:49:01 -- host/auth.sh@68 -- # keyid=1 00:19:58.332 13:49:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:58.332 13:49:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.332 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:58.332 13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.332 13:49:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.332 13:49:01 -- nvmf/common.sh@717 -- # local ip 00:19:58.332 13:49:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.332 13:49:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.332 13:49:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.332 13:49:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.332 13:49:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:58.332 13:49:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:58.332 13:49:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:58.332 13:49:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:58.332 13:49:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:58.332 13:49:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:58.332 13:49:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.332 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.260 nvme0n1 00:19:59.260 13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.260 13:49:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.260 13:49:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.260 13:49:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.260 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.260 
13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.260 13:49:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.260 13:49:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.260 13:49:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.260 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.260 13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.260 13:49:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.260 13:49:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:59.260 13:49:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.260 13:49:01 -- host/auth.sh@44 -- # digest=sha512 00:19:59.260 13:49:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.260 13:49:01 -- host/auth.sh@44 -- # keyid=2 00:19:59.260 13:49:01 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:59.260 13:49:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:59.260 13:49:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:59.260 13:49:01 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:19:59.260 13:49:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:19:59.260 13:49:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.260 13:49:01 -- host/auth.sh@68 -- # digest=sha512 00:19:59.260 13:49:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:59.260 13:49:01 -- host/auth.sh@68 -- # keyid=2 00:19:59.260 13:49:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:59.260 13:49:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.260 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.260 13:49:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.260 13:49:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.260 13:49:01 -- nvmf/common.sh@717 -- # local ip 00:19:59.260 13:49:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.260 13:49:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.260 13:49:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.260 13:49:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.260 13:49:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:59.260 13:49:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:59.260 13:49:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:59.260 13:49:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:59.260 13:49:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:59.260 13:49:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:59.260 13:49:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.260 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:19:59.822 nvme0n1 00:19:59.822 13:49:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.822 13:49:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.822 13:49:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.822 13:49:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.822 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.822 13:49:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.079 13:49:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.079 
13:49:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.079 13:49:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.079 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.079 13:49:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.079 13:49:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.079 13:49:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:00.079 13:49:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.079 13:49:02 -- host/auth.sh@44 -- # digest=sha512 00:20:00.079 13:49:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.079 13:49:02 -- host/auth.sh@44 -- # keyid=3 00:20:00.079 13:49:02 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:20:00.079 13:49:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:00.079 13:49:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:00.079 13:49:02 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:20:00.079 13:49:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:00.079 13:49:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.079 13:49:02 -- host/auth.sh@68 -- # digest=sha512 00:20:00.079 13:49:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:00.079 13:49:02 -- host/auth.sh@68 -- # keyid=3 00:20:00.079 13:49:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:00.079 13:49:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.079 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.079 13:49:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.079 13:49:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.079 13:49:02 -- nvmf/common.sh@717 -- # local ip 00:20:00.079 13:49:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.079 13:49:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.079 13:49:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.079 13:49:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.079 13:49:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:00.079 13:49:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:00.079 13:49:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:00.079 13:49:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:00.079 13:49:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:00.079 13:49:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:00.079 13:49:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.079 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.642 nvme0n1 00:20:00.642 13:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.642 13:49:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.642 13:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.642 13:49:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.642 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:20:00.642 13:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.642 13:49:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.642 13:49:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.642 13:49:03 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.642 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:20:00.642 13:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.642 13:49:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.642 13:49:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:00.642 13:49:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.642 13:49:03 -- host/auth.sh@44 -- # digest=sha512 00:20:00.642 13:49:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.642 13:49:03 -- host/auth.sh@44 -- # keyid=4 00:20:00.642 13:49:03 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:20:00.642 13:49:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:00.642 13:49:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:00.643 13:49:03 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:20:00.643 13:49:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:00.643 13:49:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.643 13:49:03 -- host/auth.sh@68 -- # digest=sha512 00:20:00.643 13:49:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:00.643 13:49:03 -- host/auth.sh@68 -- # keyid=4 00:20:00.643 13:49:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:00.643 13:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.643 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:20:00.643 13:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.643 13:49:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.643 13:49:03 -- nvmf/common.sh@717 -- # local ip 00:20:00.643 13:49:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.643 13:49:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.643 13:49:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.643 13:49:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.643 13:49:03 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:00.643 13:49:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:00.643 13:49:03 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:00.643 13:49:03 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:00.643 13:49:03 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:00.643 13:49:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.643 13:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.643 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:20:01.207 nvme0n1 00:20:01.207 13:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.207 13:49:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.207 13:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.207 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:20:01.207 13:49:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.207 13:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.464 13:49:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.464 13:49:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.464 13:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:20:01.464 13:49:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.464 13:49:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.464 13:49:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.464 13:49:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.464 13:49:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:01.464 13:49:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.464 13:49:04 -- host/auth.sh@44 -- # digest=sha512 00:20:01.464 13:49:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.464 13:49:04 -- host/auth.sh@44 -- # keyid=0 00:20:01.464 13:49:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:20:01.464 13:49:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:01.464 13:49:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:01.464 13:49:04 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV: 00:20:01.464 13:49:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:01.464 13:49:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.464 13:49:04 -- host/auth.sh@68 -- # digest=sha512 00:20:01.464 13:49:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:01.464 13:49:04 -- host/auth.sh@68 -- # keyid=0 00:20:01.464 13:49:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.464 13:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.464 13:49:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.464 13:49:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.464 13:49:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.464 13:49:04 -- nvmf/common.sh@717 -- # local ip 00:20:01.464 13:49:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.464 13:49:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.464 13:49:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.464 13:49:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.464 13:49:04 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:01.464 13:49:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:01.465 13:49:04 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:01.465 13:49:04 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:01.465 13:49:04 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:01.465 13:49:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:01.465 13:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.465 13:49:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.413 nvme0n1 00:20:02.413 13:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.413 13:49:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.413 13:49:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:02.413 13:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.413 13:49:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.413 13:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.413 13:49:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.413 13:49:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.413 13:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.413 13:49:05 -- common/autotest_common.sh@10 -- # set +x 
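
Note: every attach above is preceded by get_main_ns_ip, whose xtrace lines (the ip_candidates array, the -z checks, and the final echo of 192.168.100.8) recur throughout this section. Condensed from that trace, the helper amounts to an indirect lookup of the right environment variable for the transport in use; the sketch below is a reconstruction, with $TEST_TRANSPORT standing in for whatever variable the suite actually keys on and with the fallback branches omitted:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
        ip=${ip_candidates[$TEST_TRANSPORT]}     # rdma in this run -> NVMF_FIRST_TARGET_IP
        [[ -n ${!ip} ]] && echo "${!ip}"         # here that expands to 192.168.100.8
    }

In this run the RDMA candidate resolves to 192.168.100.8, which is why every bdev_nvme_attach_controller call in the trace targets that address on port 4420.
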
00:20:02.683 13:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.683 13:49:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:02.683 13:49:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:02.683 13:49:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:02.683 13:49:05 -- host/auth.sh@44 -- # digest=sha512 00:20:02.683 13:49:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.683 13:49:05 -- host/auth.sh@44 -- # keyid=1 00:20:02.683 13:49:05 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:20:02.683 13:49:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:02.683 13:49:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:02.683 13:49:05 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:20:02.683 13:49:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:02.683 13:49:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:02.683 13:49:05 -- host/auth.sh@68 -- # digest=sha512 00:20:02.683 13:49:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:02.683 13:49:05 -- host/auth.sh@68 -- # keyid=1 00:20:02.683 13:49:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:02.683 13:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.683 13:49:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.683 13:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.683 13:49:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:02.683 13:49:05 -- nvmf/common.sh@717 -- # local ip 00:20:02.683 13:49:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:02.683 13:49:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:02.683 13:49:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.683 13:49:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.683 13:49:05 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:02.683 13:49:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:02.683 13:49:05 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:02.683 13:49:05 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:02.683 13:49:05 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:02.683 13:49:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:02.683 13:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.683 13:49:05 -- common/autotest_common.sh@10 -- # set +x 00:20:04.051 nvme0n1 00:20:04.051 13:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.051 13:49:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.051 13:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.051 13:49:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.051 13:49:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:04.051 13:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.051 13:49:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.051 13:49:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.051 13:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.051 13:49:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.051 13:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.051 13:49:06 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:04.051 13:49:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:04.051 13:49:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:04.051 13:49:06 -- host/auth.sh@44 -- # digest=sha512 00:20:04.051 13:49:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.051 13:49:06 -- host/auth.sh@44 -- # keyid=2 00:20:04.051 13:49:06 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:20:04.051 13:49:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:04.051 13:49:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:04.051 13:49:06 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJjOTYwZGU2YTFlY2M1MTQ2M2Q0OWRlNjM2N2RmZDXZmG7u: 00:20:04.051 13:49:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:04.051 13:49:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:04.051 13:49:06 -- host/auth.sh@68 -- # digest=sha512 00:20:04.051 13:49:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:04.051 13:49:06 -- host/auth.sh@68 -- # keyid=2 00:20:04.051 13:49:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:04.051 13:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.051 13:49:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.051 13:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.051 13:49:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:04.051 13:49:06 -- nvmf/common.sh@717 -- # local ip 00:20:04.051 13:49:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:04.051 13:49:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:04.051 13:49:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.051 13:49:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.051 13:49:06 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:04.051 13:49:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:04.051 13:49:06 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:04.051 13:49:06 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:04.051 13:49:06 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:04.051 13:49:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:04.051 13:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.051 13:49:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.982 nvme0n1 00:20:04.982 13:49:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.982 13:49:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.982 13:49:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:04.982 13:49:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.982 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:20:05.238 13:49:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.238 13:49:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.238 13:49:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.238 13:49:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.238 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:20:05.238 13:49:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.238 13:49:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:05.238 13:49:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 
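
Note: the DHHC-1:... strings echoed into the target throughout this section follow the DH-HMAC-CHAP secret representation, DHHC-1:<id>:<base64 payload>:, where (per the in-band authentication spec, not re-derived from this log) id 00 marks an untransformed secret while 01/02/03 mark secrets pre-transformed with SHA-256/384/512, and the base64 payload is the secret followed by a CRC-32 of it. One of the keys from this run can be inspected with nothing more than base64:

    # keyid=0 secret exactly as it appears in the trace; the trailing field is empty.
    key='DHHC-1:00:MjRhMTQ4OTg2OTgxMmU4MmU0Zjk5ZmQ5ZTBiYjZlYzj9AFkV:'
    payload=$(echo "$key" | cut -d: -f3)
    echo -n "$payload" | base64 -d | wc -c   # 36 bytes if this is a 32-byte secret + 4-byte CRC-32

The other keys in this section decode the same way, differing only in payload length.
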
00:20:05.238 13:49:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:05.238 13:49:07 -- host/auth.sh@44 -- # digest=sha512 00:20:05.238 13:49:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.238 13:49:07 -- host/auth.sh@44 -- # keyid=3 00:20:05.238 13:49:07 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:20:05.238 13:49:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:05.238 13:49:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:05.239 13:49:07 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTdhNTI1YjEzYmVjZDQ0NTY1NTEyYjZlOGZkM2U5N2QxYjMxNWEwYjQyNjA5YTEwjVrA9Q==: 00:20:05.239 13:49:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:20:05.239 13:49:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:05.239 13:49:07 -- host/auth.sh@68 -- # digest=sha512 00:20:05.239 13:49:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:05.239 13:49:07 -- host/auth.sh@68 -- # keyid=3 00:20:05.239 13:49:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.239 13:49:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.239 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:20:05.239 13:49:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.239 13:49:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:05.239 13:49:07 -- nvmf/common.sh@717 -- # local ip 00:20:05.239 13:49:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:05.239 13:49:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:05.239 13:49:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.239 13:49:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.239 13:49:07 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:05.239 13:49:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:05.239 13:49:07 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:05.239 13:49:07 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:05.239 13:49:07 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:05.239 13:49:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:05.239 13:49:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.239 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:20:06.611 nvme0n1 00:20:06.611 13:49:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.611 13:49:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.611 13:49:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.611 13:49:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:06.611 13:49:08 -- common/autotest_common.sh@10 -- # set +x 00:20:06.611 13:49:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.611 13:49:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.611 13:49:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.611 13:49:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.611 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:20:06.611 13:49:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.611 13:49:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:06.611 13:49:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:06.611 13:49:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:06.611 
13:49:09 -- host/auth.sh@44 -- # digest=sha512 00:20:06.611 13:49:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.611 13:49:09 -- host/auth.sh@44 -- # keyid=4 00:20:06.611 13:49:09 -- host/auth.sh@45 -- # key=DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:20:06.611 13:49:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:06.611 13:49:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:06.611 13:49:09 -- host/auth.sh@49 -- # echo DHHC-1:03:NmIzYTMyN2RmZGM1NGUwODkyY2Y5Nzc0YWMxZmI4ODdmNTAyYThkNTJjOWFhNThmNWI3ZGY3MWZiMDU5NGZjNE8nXEs=: 00:20:06.611 13:49:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:20:06.611 13:49:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:06.611 13:49:09 -- host/auth.sh@68 -- # digest=sha512 00:20:06.611 13:49:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:06.611 13:49:09 -- host/auth.sh@68 -- # keyid=4 00:20:06.611 13:49:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:06.611 13:49:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.611 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:20:06.611 13:49:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.611 13:49:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:06.611 13:49:09 -- nvmf/common.sh@717 -- # local ip 00:20:06.611 13:49:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:06.611 13:49:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:06.611 13:49:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.611 13:49:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.611 13:49:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:06.611 13:49:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:06.611 13:49:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:06.611 13:49:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:06.611 13:49:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:06.611 13:49:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.611 13:49:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.611 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:20:07.543 nvme0n1 00:20:07.543 13:49:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.543 13:49:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.543 13:49:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:07.543 13:49:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.543 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.544 13:49:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.544 13:49:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.544 13:49:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.544 13:49:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.544 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.801 13:49:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.801 13:49:10 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:07.801 13:49:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:07.801 13:49:10 -- host/auth.sh@44 -- # digest=sha256 00:20:07.801 13:49:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.801 
13:49:10 -- host/auth.sh@44 -- # keyid=1 00:20:07.801 13:49:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:20:07.801 13:49:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:07.802 13:49:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:07.802 13:49:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NmY2YmMyNGQ3ZjE5OTFiZmMxOTRkMmM4MDgyOGFjNjNkMzk5NThlOTU4ZDAzNzZjigvXJg==: 00:20:07.802 13:49:10 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.802 13:49:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.802 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.802 13:49:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.802 13:49:10 -- host/auth.sh@119 -- # get_main_ns_ip 00:20:07.802 13:49:10 -- nvmf/common.sh@717 -- # local ip 00:20:07.802 13:49:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.802 13:49:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.802 13:49:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.802 13:49:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.802 13:49:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:07.802 13:49:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:07.802 13:49:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:07.802 13:49:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:07.802 13:49:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:07.802 13:49:10 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:07.802 13:49:10 -- common/autotest_common.sh@638 -- # local es=0 00:20:07.802 13:49:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:07.802 13:49:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:07.802 13:49:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:07.802 13:49:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:07.802 13:49:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:07.802 13:49:10 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:07.802 13:49:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.802 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.802 request: 00:20:07.802 { 00:20:07.802 "name": "nvme0", 00:20:07.802 "trtype": "rdma", 00:20:07.802 "traddr": "192.168.100.8", 00:20:07.802 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:07.802 "adrfam": "ipv4", 00:20:07.802 "trsvcid": "4420", 00:20:07.802 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:07.802 "method": "bdev_nvme_attach_controller", 00:20:07.802 "req_id": 1 00:20:07.802 } 00:20:07.802 Got JSON-RPC error response 00:20:07.802 response: 00:20:07.802 { 00:20:07.802 "code": -32602, 00:20:07.802 "message": "Invalid parameters" 00:20:07.802 } 00:20:07.802 13:49:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:07.802 13:49:10 -- common/autotest_common.sh@641 -- # es=1 00:20:07.802 13:49:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:07.802 13:49:10 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:07.802 13:49:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:07.802 13:49:10 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.802 13:49:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.802 13:49:10 -- host/auth.sh@121 -- # jq length 00:20:07.802 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.802 13:49:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.802 13:49:10 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:20:07.802 13:49:10 -- host/auth.sh@124 -- # get_main_ns_ip 00:20:07.802 13:49:10 -- nvmf/common.sh@717 -- # local ip 00:20:07.802 13:49:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.802 13:49:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.802 13:49:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.802 13:49:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.802 13:49:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:07.802 13:49:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:07.802 13:49:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:07.802 13:49:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:07.802 13:49:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:07.802 13:49:10 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:07.802 13:49:10 -- common/autotest_common.sh@638 -- # local es=0 00:20:07.802 13:49:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:07.802 13:49:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:07.802 13:49:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:07.802 13:49:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:07.802 13:49:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:07.802 13:49:10 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:07.802 13:49:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.802 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.802 request: 00:20:07.802 { 00:20:07.802 "name": "nvme0", 00:20:07.802 "trtype": "rdma", 00:20:07.802 "traddr": "192.168.100.8", 00:20:07.802 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:07.802 "adrfam": "ipv4", 00:20:07.802 "trsvcid": "4420", 00:20:07.802 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:07.802 "dhchap_key": "key2", 00:20:07.802 "method": "bdev_nvme_attach_controller", 00:20:07.802 "req_id": 1 00:20:07.802 } 00:20:07.802 Got JSON-RPC error response 00:20:07.802 response: 00:20:07.802 { 00:20:07.802 "code": -32602, 00:20:07.802 "message": "Invalid parameters" 00:20:07.802 } 00:20:07.802 13:49:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:07.802 13:49:10 -- common/autotest_common.sh@641 -- # es=1 00:20:07.802 13:49:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:07.802 13:49:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:07.802 13:49:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:07.802 13:49:10 -- host/auth.sh@127 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:07.802 13:49:10 -- host/auth.sh@127 -- # jq length 00:20:07.802 13:49:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.802 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:20:07.802 13:49:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.059 13:49:10 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:20:08.060 13:49:10 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:20:08.060 13:49:10 -- host/auth.sh@130 -- # cleanup 00:20:08.060 13:49:10 -- host/auth.sh@24 -- # nvmftestfini 00:20:08.060 13:49:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:08.060 13:49:10 -- nvmf/common.sh@117 -- # sync 00:20:08.060 13:49:10 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:08.060 13:49:10 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:08.060 13:49:10 -- nvmf/common.sh@120 -- # set +e 00:20:08.060 13:49:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.060 13:49:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:08.060 rmmod nvme_rdma 00:20:08.060 rmmod nvme_fabrics 00:20:08.060 13:49:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.060 13:49:10 -- nvmf/common.sh@124 -- # set -e 00:20:08.060 13:49:10 -- nvmf/common.sh@125 -- # return 0 00:20:08.060 13:49:10 -- nvmf/common.sh@478 -- # '[' -n 1193249 ']' 00:20:08.060 13:49:10 -- nvmf/common.sh@479 -- # killprocess 1193249 00:20:08.060 13:49:10 -- common/autotest_common.sh@936 -- # '[' -z 1193249 ']' 00:20:08.060 13:49:10 -- common/autotest_common.sh@940 -- # kill -0 1193249 00:20:08.060 13:49:10 -- common/autotest_common.sh@941 -- # uname 00:20:08.060 13:49:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.060 13:49:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193249 00:20:08.060 13:49:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:08.060 13:49:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:08.060 13:49:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193249' 00:20:08.060 killing process with pid 1193249 00:20:08.060 13:49:10 -- common/autotest_common.sh@955 -- # kill 1193249 00:20:08.060 13:49:10 -- common/autotest_common.sh@960 -- # wait 1193249 00:20:08.318 13:49:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:08.318 13:49:10 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:08.318 13:49:10 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:08.318 13:49:10 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:08.318 13:49:10 -- host/auth.sh@27 -- # clean_kernel_target 00:20:08.318 13:49:10 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:08.318 13:49:10 -- nvmf/common.sh@675 -- # echo 0 00:20:08.318 13:49:10 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:08.318 13:49:10 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:08.318 13:49:11 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:08.318 13:49:11 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:08.318 13:49:11 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:08.318 13:49:11 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:20:08.318 13:49:11 -- 
nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:09.693 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:09.693 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:09.693 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:09.693 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:09.693 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:09.693 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:09.693 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:09.693 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:09.693 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:11.067 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:20:11.067 13:49:13 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4pI /tmp/spdk.key-null.xEf /tmp/spdk.key-sha256.pvw /tmp/spdk.key-sha384.KL8 /tmp/spdk.key-sha512.2Xy /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:20:11.067 13:49:13 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:12.440 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:20:12.440 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:12.440 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:20:12.440 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:20:12.440 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:20:12.440 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:20:12.440 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:20:12.441 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:20:12.441 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:20:12.441 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:20:12.441 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:20:12.441 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:20:12.441 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:20:12.441 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:20:12.441 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:20:12.441 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:20:12.441 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:20:12.699 00:20:12.699 real 0m58.963s 00:20:12.699 user 0m57.313s 00:20:12.699 sys 0m7.227s 00:20:12.699 13:49:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:12.699 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:20:12.699 ************************************ 00:20:12.699 END TEST nvmf_auth 00:20:12.699 ************************************ 00:20:12.699 13:49:15 -- nvmf/nvmf.sh@104 -- # [[ rdma == \t\c\p ]] 00:20:12.699 13:49:15 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:20:12.699 13:49:15 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:20:12.699 13:49:15 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:20:12.699 13:49:15 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:12.699 13:49:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:12.699 
13:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.699 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:20:12.699 ************************************ 00:20:12.699 START TEST nvmf_bdevperf 00:20:12.699 ************************************ 00:20:12.699 13:49:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:12.699 * Looking for test storage... 00:20:12.699 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:12.699 13:49:15 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.699 13:49:15 -- nvmf/common.sh@7 -- # uname -s 00:20:12.699 13:49:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.699 13:49:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.699 13:49:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.699 13:49:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.699 13:49:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.699 13:49:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.699 13:49:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.699 13:49:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.699 13:49:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.699 13:49:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.699 13:49:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:12.699 13:49:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:20:12.699 13:49:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.699 13:49:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.699 13:49:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.699 13:49:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.699 13:49:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:12.699 13:49:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.699 13:49:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.699 13:49:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.699 13:49:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.699 13:49:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.699 13:49:15 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.699 13:49:15 -- paths/export.sh@5 -- # export PATH 00:20:12.699 13:49:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.699 13:49:15 -- nvmf/common.sh@47 -- # : 0 00:20:12.699 13:49:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.699 13:49:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.699 13:49:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.699 13:49:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.699 13:49:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.699 13:49:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.699 13:49:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.699 13:49:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.699 13:49:15 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.699 13:49:15 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.699 13:49:15 -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:12.699 13:49:15 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:12.699 13:49:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.699 13:49:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:12.699 13:49:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:12.699 13:49:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:12.699 13:49:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.699 13:49:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.699 13:49:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.957 13:49:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:12.957 13:49:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:12.957 13:49:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.957 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:20:15.486 13:49:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:15.486 13:49:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:15.486 13:49:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:15.486 13:49:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:15.486 13:49:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:15.486 13:49:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:15.486 13:49:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:15.486 13:49:18 -- nvmf/common.sh@295 -- # net_devs=() 00:20:15.486 13:49:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:15.486 13:49:18 
-- nvmf/common.sh@296 -- # e810=() 00:20:15.486 13:49:18 -- nvmf/common.sh@296 -- # local -ga e810 00:20:15.486 13:49:18 -- nvmf/common.sh@297 -- # x722=() 00:20:15.486 13:49:18 -- nvmf/common.sh@297 -- # local -ga x722 00:20:15.486 13:49:18 -- nvmf/common.sh@298 -- # mlx=() 00:20:15.486 13:49:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:15.486 13:49:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.486 13:49:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:15.486 13:49:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:15.486 13:49:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:15.486 13:49:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:15.486 13:49:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:15.486 13:49:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.486 13:49:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:20:15.486 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:20:15.486 13:49:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.486 13:49:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.486 13:49:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:20:15.486 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:20:15.486 13:49:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.486 13:49:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:15.486 13:49:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.486 13:49:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.486 13:49:18 -- nvmf/common.sh@384 
-- # (( 1 == 0 )) 00:20:15.486 13:49:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.486 13:49:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:20:15.486 Found net devices under 0000:81:00.0: mlx_0_0 00:20:15.486 13:49:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.486 13:49:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.486 13:49:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.486 13:49:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:15.486 13:49:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.486 13:49:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:20:15.486 Found net devices under 0000:81:00.1: mlx_0_1 00:20:15.486 13:49:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.486 13:49:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:15.486 13:49:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:15.486 13:49:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:15.486 13:49:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:15.486 13:49:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:15.486 13:49:18 -- nvmf/common.sh@58 -- # uname 00:20:15.486 13:49:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:15.486 13:49:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:15.486 13:49:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:15.486 13:49:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:15.486 13:49:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:15.486 13:49:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:15.486 13:49:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:15.486 13:49:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:15.486 13:49:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:15.486 13:49:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:15.486 13:49:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:15.486 13:49:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.486 13:49:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:15.486 13:49:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:15.487 13:49:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:15.487 13:49:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:15.487 13:49:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@105 -- # continue 2 00:20:15.487 13:49:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@105 -- # continue 2 00:20:15.487 13:49:18 -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:20:15.487 13:49:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:15.487 13:49:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:15.487 13:49:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:15.487 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.487 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:20:15.487 altname enp129s0f0np0 00:20:15.487 inet 192.168.100.8/24 scope global mlx_0_0 00:20:15.487 valid_lft forever preferred_lft forever 00:20:15.487 13:49:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:15.487 13:49:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:15.487 13:49:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:15.487 13:49:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:15.487 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.487 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:20:15.487 altname enp129s0f1np1 00:20:15.487 inet 192.168.100.9/24 scope global mlx_0_1 00:20:15.487 valid_lft forever preferred_lft forever 00:20:15.487 13:49:18 -- nvmf/common.sh@411 -- # return 0 00:20:15.487 13:49:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:15.487 13:49:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:15.487 13:49:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:15.487 13:49:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:15.487 13:49:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.487 13:49:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:15.487 13:49:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:15.487 13:49:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:15.487 13:49:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:15.487 13:49:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@105 -- # continue 2 00:20:15.487 13:49:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.487 13:49:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.487 13:49:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@105 -- # continue 2 00:20:15.487 13:49:18 -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:15.487 13:49:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:15.487 13:49:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:15.487 13:49:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:15.487 13:49:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:15.487 13:49:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:15.487 192.168.100.9' 00:20:15.487 13:49:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:15.487 192.168.100.9' 00:20:15.487 13:49:18 -- nvmf/common.sh@446 -- # head -n 1 00:20:15.487 13:49:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:15.487 13:49:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:15.487 192.168.100.9' 00:20:15.487 13:49:18 -- nvmf/common.sh@447 -- # tail -n +2 00:20:15.487 13:49:18 -- nvmf/common.sh@447 -- # head -n 1 00:20:15.487 13:49:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:15.487 13:49:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:15.487 13:49:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:15.487 13:49:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:15.487 13:49:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:15.487 13:49:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:15.487 13:49:18 -- host/bdevperf.sh@25 -- # tgt_init 00:20:15.487 13:49:18 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:15.487 13:49:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:15.487 13:49:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:15.487 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:15.487 13:49:18 -- nvmf/common.sh@470 -- # nvmfpid=1204390 00:20:15.487 13:49:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:15.487 13:49:18 -- nvmf/common.sh@471 -- # waitforlisten 1204390 00:20:15.487 13:49:18 -- common/autotest_common.sh@817 -- # '[' -z 1204390 ']' 00:20:15.487 13:49:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.487 13:49:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.487 13:49:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.487 13:49:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.487 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:15.745 [2024-04-18 13:49:18.306759] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
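The pair of get_ip_address calls traced just above is what yields the 192.168.100.8 / 192.168.100.9 addresses used for the rest of the test. A minimal sketch of that discovery step, assuming the mlx_0_0/mlx_0_1 netdev names reported for 0000:81:00.0/0000:81:00.1 in this run (not the verbatim nvmf/common.sh source):

  # Resolve the first IPv4 address on an RDMA-capable netdev, as traced above.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # Collect one address per RDMA interface, then split into first/second target IPs.
  RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here

With the addresses known, the script loads nvme-rdma and launches nvmf_tgt (pid 1204390 above); the DPDK/EAL startup output below is that target coming up.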
00:20:15.745 [2024-04-18 13:49:18.306847] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.745 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.745 [2024-04-18 13:49:18.387590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:15.745 [2024-04-18 13:49:18.512296] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.745 [2024-04-18 13:49:18.512370] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.745 [2024-04-18 13:49:18.512387] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.745 [2024-04-18 13:49:18.512400] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.745 [2024-04-18 13:49:18.512412] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.745 [2024-04-18 13:49:18.512731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.745 [2024-04-18 13:49:18.512787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.745 [2024-04-18 13:49:18.512791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.003 13:49:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.003 13:49:18 -- common/autotest_common.sh@850 -- # return 0 00:20:16.003 13:49:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:16.003 13:49:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:16.003 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.003 13:49:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.003 13:49:18 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:16.003 13:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.003 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.003 [2024-04-18 13:49:18.698233] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17707d0/0x1774cc0) succeed. 00:20:16.003 [2024-04-18 13:49:18.710463] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1771d20/0x17b6350) succeed. 
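The rpc_cmd wrapper used by host/bdevperf.sh above talks to this freshly started nvmf_tgt over its JSON-RPC socket. A rough standalone equivalent of the transport-creation call in the trace; the /var/tmp/spdk.sock path is the SPDK default and an assumption here, since the trace does not print it:

  # Hedged equivalent of: rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices just above confirm the RDMA transport bound both mlx5 ports.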
00:20:16.264 13:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.264 13:49:18 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:16.264 13:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.264 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.264 Malloc0 00:20:16.264 13:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.264 13:49:18 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.264 13:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.264 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.264 13:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.264 13:49:18 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:16.264 13:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.264 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.264 13:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.264 13:49:18 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:16.264 13:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.264 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.264 [2024-04-18 13:49:18.893078] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:16.264 13:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.264 13:49:18 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:16.264 13:49:18 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:16.264 13:49:18 -- nvmf/common.sh@521 -- # config=() 00:20:16.264 13:49:18 -- nvmf/common.sh@521 -- # local subsystem config 00:20:16.264 13:49:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.264 13:49:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.264 { 00:20:16.264 "params": { 00:20:16.264 "name": "Nvme$subsystem", 00:20:16.264 "trtype": "$TEST_TRANSPORT", 00:20:16.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.264 "adrfam": "ipv4", 00:20:16.264 "trsvcid": "$NVMF_PORT", 00:20:16.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.264 "hdgst": ${hdgst:-false}, 00:20:16.264 "ddgst": ${ddgst:-false} 00:20:16.264 }, 00:20:16.264 "method": "bdev_nvme_attach_controller" 00:20:16.264 } 00:20:16.264 EOF 00:20:16.264 )") 00:20:16.264 13:49:18 -- nvmf/common.sh@543 -- # cat 00:20:16.264 13:49:18 -- nvmf/common.sh@545 -- # jq . 00:20:16.264 13:49:18 -- nvmf/common.sh@546 -- # IFS=, 00:20:16.264 13:49:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:16.264 "params": { 00:20:16.264 "name": "Nvme1", 00:20:16.264 "trtype": "rdma", 00:20:16.264 "traddr": "192.168.100.8", 00:20:16.264 "adrfam": "ipv4", 00:20:16.264 "trsvcid": "4420", 00:20:16.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.264 "hdgst": false, 00:20:16.264 "ddgst": false 00:20:16.264 }, 00:20:16.264 "method": "bdev_nvme_attach_controller" 00:20:16.264 }' 00:20:16.264 [2024-04-18 13:49:18.944899] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
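The printf from gen_nvmf_target_json above is the per-controller entry that bdevperf consumes through --json /dev/fd/62. Reassembled as one document it looks roughly like the sketch below; the inner params/method object is exactly what the trace prints, while the outer "subsystems"/"bdev"/"config" wrapper is an assumption about how nvmf/common.sh packages it and is not itself visible in this trace:

  # Hypothetical reconstruction of the config handed to bdevperf via /dev/fd/62.
  cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "rdma",
              "traddr": "192.168.100.8",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF

This attaches the Nvme1 controller over RDMA to the subsystem created a few lines earlier (nqn.2016-06.io.spdk:cnode1, listening on 192.168.100.8 port 4420).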
00:20:16.264 [2024-04-18 13:49:18.945003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204423 ] 00:20:16.264 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.264 [2024-04-18 13:49:19.031254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.527 [2024-04-18 13:49:19.155641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.785 Running I/O for 1 seconds... 00:20:17.716 00:20:17.716 Latency(us) 00:20:17.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.716 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:17.716 Verification LBA range: start 0x0 length 0x4000 00:20:17.716 Nvme1n1 : 1.01 12915.70 50.45 0.00 0.00 9839.79 3276.80 13495.56 00:20:17.716 =================================================================================================================== 00:20:17.716 Total : 12915.70 50.45 0.00 0.00 9839.79 3276.80 13495.56 00:20:17.974 13:49:20 -- host/bdevperf.sh@30 -- # bdevperfpid=1204676 00:20:17.974 13:49:20 -- host/bdevperf.sh@32 -- # sleep 3 00:20:17.974 13:49:20 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:20:17.974 13:49:20 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:20:17.974 13:49:20 -- nvmf/common.sh@521 -- # config=() 00:20:17.974 13:49:20 -- nvmf/common.sh@521 -- # local subsystem config 00:20:17.974 13:49:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.974 13:49:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.974 { 00:20:17.974 "params": { 00:20:17.974 "name": "Nvme$subsystem", 00:20:17.974 "trtype": "$TEST_TRANSPORT", 00:20:17.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.974 "adrfam": "ipv4", 00:20:17.974 "trsvcid": "$NVMF_PORT", 00:20:17.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.974 "hdgst": ${hdgst:-false}, 00:20:17.974 "ddgst": ${ddgst:-false} 00:20:17.974 }, 00:20:17.974 "method": "bdev_nvme_attach_controller" 00:20:17.974 } 00:20:17.974 EOF 00:20:17.974 )") 00:20:17.974 13:49:20 -- nvmf/common.sh@543 -- # cat 00:20:17.974 13:49:20 -- nvmf/common.sh@545 -- # jq . 00:20:17.974 13:49:20 -- nvmf/common.sh@546 -- # IFS=, 00:20:17.974 13:49:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:17.974 "params": { 00:20:17.974 "name": "Nvme1", 00:20:17.974 "trtype": "rdma", 00:20:17.974 "traddr": "192.168.100.8", 00:20:17.974 "adrfam": "ipv4", 00:20:17.974 "trsvcid": "4420", 00:20:17.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.974 "hdgst": false, 00:20:17.974 "ddgst": false 00:20:17.974 }, 00:20:17.974 "method": "bdev_nvme_attach_controller" 00:20:17.974 }' 00:20:17.974 [2024-04-18 13:49:20.744825] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
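After the 1-second baseline above (about 12.9k IOPS of 4 KiB verify I/O against Nvme1n1), host/bdevperf.sh starts a second, 15-second run and then deliberately kills the target part-way through. The flags, paths, and pids in the sketch below are taken from the trace; the backgrounding with & and $! is inferred rather than shown:

  # Second run as traced: 15 s verify workload, launched in the background.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!          # 1204676 in this trace
  sleep 3
  kill -9 "$nvmfpid"      # 1204390, the nvmf_tgt started earlier
  sleep 3

Killing the target while up to 128 commands are in flight is what produces the long run of "ABORTED - SQ DELETION" completions that follows.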
00:20:17.974 [2024-04-18 13:49:20.744930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204676 ] 00:20:18.240 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.240 [2024-04-18 13:49:20.833799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.240 [2024-04-18 13:49:20.954432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.496 Running I/O for 15 seconds... 00:20:21.024 13:49:23 -- host/bdevperf.sh@33 -- # kill -9 1204390 00:20:21.024 13:49:23 -- host/bdevperf.sh@35 -- # sleep 3 00:20:21.957 [2024-04-18 13:49:24.722499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:3672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.722981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.722998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.723013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.723031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.723046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.723063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.723078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.723096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.723111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.723129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3744 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007556000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.723144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.723161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.723176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.723193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x39500 00:20:21.957 [2024-04-18 13:49:24.723208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.957 [2024-04-18 13:49:24.723228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 
13:49:24.723446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.723955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.723984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 
p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x39500 00:20:21.958 [2024-04-18 13:49:24.724608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.724962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.724983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.958 [2024-04-18 13:49:24.725002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.958 [2024-04-18 13:49:24.725017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 
[2024-04-18 13:49:24.725375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 
sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.725970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.725987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 
[2024-04-18 13:49:24.726670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.726818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.959 [2024-04-18 13:49:24.726833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:bdb0 p:0 m:0 dnr:0 00:20:21.959 [2024-04-18 13:49:24.728840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.960 [2024-04-18 13:49:24.728864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:21.960 [2024-04-18 13:49:24.728879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4632 len:8 PRP1 0x0 PRP2 0x0 00:20:21.960 [2024-04-18 13:49:24.728894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.960 [2024-04-18 13:49:24.728966] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 
00:20:21.960 [2024-04-18 13:49:24.732646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.960 [2024-04-18 13:49:24.751761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:21.960 [2024-04-18 13:49:24.755203] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:21.960 [2024-04-18 13:49:24.755234] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:21.960 [2024-04-18 13:49:24.755248] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:20:23.329 [2024-04-18 13:49:25.759531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:23.329 [2024-04-18 13:49:25.759568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.329 [2024-04-18 13:49:25.759806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.329 [2024-04-18 13:49:25.759829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.329 [2024-04-18 13:49:25.759845] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:23.329 [2024-04-18 13:49:25.763384] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.329 [2024-04-18 13:49:25.771099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.329 [2024-04-18 13:49:25.774150] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:23.329 [2024-04-18 13:49:25.774181] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:23.329 [2024-04-18 13:49:25.774195] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:20:24.259 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1204390 Killed "${NVMF_APP[@]}" "$@" 00:20:24.259 13:49:26 -- host/bdevperf.sh@36 -- # tgt_init 00:20:24.259 13:49:26 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:24.259 13:49:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:24.259 13:49:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:24.259 13:49:26 -- common/autotest_common.sh@10 -- # set +x 00:20:24.259 13:49:26 -- nvmf/common.sh@470 -- # nvmfpid=1205342 00:20:24.259 13:49:26 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:24.259 13:49:26 -- nvmf/common.sh@471 -- # waitforlisten 1205342 00:20:24.259 13:49:26 -- common/autotest_common.sh@817 -- # '[' -z 1205342 ']' 00:20:24.259 13:49:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.259 13:49:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:24.259 13:49:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:24.259 13:49:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:24.259 13:49:26 -- common/autotest_common.sh@10 -- # set +x 00:20:24.259 [2024-04-18 13:49:26.753233] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:24.259 [2024-04-18 13:49:26.753316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.259 [2024-04-18 13:49:26.778474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:24.259 [2024-04-18 13:49:26.778512] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:24.259 [2024-04-18 13:49:26.778747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:24.259 [2024-04-18 13:49:26.778778] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:24.259 [2024-04-18 13:49:26.778794] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:24.259 [2024-04-18 13:49:26.778825] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:24.259 [2024-04-18 13:49:26.782371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:24.259 [2024-04-18 13:49:26.792604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:24.259 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.259 [2024-04-18 13:49:26.795968] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:24.259 [2024-04-18 13:49:26.796001] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:24.259 [2024-04-18 13:49:26.796015] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:20:24.259 [2024-04-18 13:49:26.834894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.259 [2024-04-18 13:49:26.955724] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.259 [2024-04-18 13:49:26.955789] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.259 [2024-04-18 13:49:26.955805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.259 [2024-04-18 13:49:26.955819] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.259 [2024-04-18 13:49:26.955831] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
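The tgt_init path above restarts nvmf_tgt and then blocks in waitforlisten until the RPC server answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of the same bring-up, assuming an SPDK build tree in ./build and the default RPC socket (the polling loop stands in for waitforlisten and is not the test's literal code):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # optional, as the NOTICE above suggests: snapshot nvmf tracepoints for instance 0
    spdk_trace -s nvmf -i 0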
00:20:24.259 [2024-04-18 13:49:26.955922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.259 [2024-04-18 13:49:26.955988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.259 [2024-04-18 13:49:26.955993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.517 13:49:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:24.517 13:49:27 -- common/autotest_common.sh@850 -- # return 0 00:20:24.517 13:49:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:24.517 13:49:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:24.517 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.517 13:49:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.517 13:49:27 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:24.517 13:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.517 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.517 [2024-04-18 13:49:27.130530] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x60a7d0/0x60ecc0) succeed. 00:20:24.517 [2024-04-18 13:49:27.142630] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x60bd20/0x650350) succeed. 00:20:24.517 13:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.517 13:49:27 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:24.517 13:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.517 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.517 Malloc0 00:20:24.517 13:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.517 13:49:27 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.517 13:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.517 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.517 13:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.517 13:49:27 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:24.517 13:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.517 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.517 13:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.517 13:49:27 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:24.517 13:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.517 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.517 [2024-04-18 13:49:27.313449] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:24.517 13:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.517 13:49:27 -- host/bdevperf.sh@38 -- # wait 1204676 00:20:25.081 [2024-04-18 13:49:27.800233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:25.081 [2024-04-18 13:49:27.800274] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
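The rpc_cmd sequence above rebuilds the target configuration: an RDMA transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and an RDMA listener on 192.168.100.8 port 4420. A hedged sketch of the same steps issued directly through scripts/rpc.py (the rpc.py path is an assumption; the RPC names and arguments are taken from the log):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, the host side (the bdevperf process waited on above) can reconnect and resume I/O.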
00:20:25.081 [2024-04-18 13:49:27.800508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:25.081 [2024-04-18 13:49:27.800532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:25.081 [2024-04-18 13:49:27.800549] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:25.081 [2024-04-18 13:49:27.804096] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.081 [2024-04-18 13:49:27.812441] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:25.081 [2024-04-18 13:49:27.873221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:35.052 00:20:35.052 Latency(us) 00:20:35.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.052 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:35.052 Verification LBA range: start 0x0 length 0x4000 00:20:35.052 Nvme1n1 : 15.01 9464.97 36.97 7704.29 0.00 7427.17 646.26 1031488.09 00:20:35.052 =================================================================================================================== 00:20:35.052 Total : 9464.97 36.97 7704.29 0.00 7427.17 646.26 1031488.09 00:20:35.053 13:49:36 -- host/bdevperf.sh@39 -- # sync 00:20:35.053 13:49:36 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:35.053 13:49:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.053 13:49:36 -- common/autotest_common.sh@10 -- # set +x 00:20:35.053 13:49:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.053 13:49:36 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:20:35.053 13:49:36 -- host/bdevperf.sh@44 -- # nvmftestfini 00:20:35.053 13:49:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:35.053 13:49:36 -- nvmf/common.sh@117 -- # sync 00:20:35.053 13:49:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:35.053 13:49:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:35.053 13:49:36 -- nvmf/common.sh@120 -- # set +e 00:20:35.053 13:49:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.053 13:49:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:35.053 rmmod nvme_rdma 00:20:35.053 rmmod nvme_fabrics 00:20:35.053 13:49:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.053 13:49:36 -- nvmf/common.sh@124 -- # set -e 00:20:35.053 13:49:36 -- nvmf/common.sh@125 -- # return 0 00:20:35.053 13:49:36 -- nvmf/common.sh@478 -- # '[' -n 1205342 ']' 00:20:35.053 13:49:36 -- nvmf/common.sh@479 -- # killprocess 1205342 00:20:35.053 13:49:36 -- common/autotest_common.sh@936 -- # '[' -z 1205342 ']' 00:20:35.053 13:49:36 -- common/autotest_common.sh@940 -- # kill -0 1205342 00:20:35.053 13:49:36 -- common/autotest_common.sh@941 -- # uname 00:20:35.053 13:49:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:35.053 13:49:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1205342 00:20:35.053 13:49:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:35.053 13:49:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:35.053 13:49:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1205342' 00:20:35.053 killing process with pid 1205342 00:20:35.053 13:49:36 -- common/autotest_common.sh@955 -- # kill 
1205342 00:20:35.053 13:49:36 -- common/autotest_common.sh@960 -- # wait 1205342 00:20:35.053 13:49:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:35.053 13:49:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:35.053 00:20:35.053 real 0m21.606s 00:20:35.053 user 1m3.644s 00:20:35.053 sys 0m3.256s 00:20:35.053 13:49:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:35.053 13:49:37 -- common/autotest_common.sh@10 -- # set +x 00:20:35.053 ************************************ 00:20:35.053 END TEST nvmf_bdevperf 00:20:35.053 ************************************ 00:20:35.053 13:49:37 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:20:35.053 13:49:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:35.053 13:49:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:35.053 13:49:37 -- common/autotest_common.sh@10 -- # set +x 00:20:35.053 ************************************ 00:20:35.053 START TEST nvmf_target_disconnect 00:20:35.053 ************************************ 00:20:35.053 13:49:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:20:35.053 * Looking for test storage... 00:20:35.053 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:35.053 13:49:37 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.053 13:49:37 -- nvmf/common.sh@7 -- # uname -s 00:20:35.053 13:49:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.053 13:49:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.053 13:49:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.053 13:49:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.053 13:49:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.053 13:49:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.053 13:49:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.053 13:49:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.053 13:49:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.053 13:49:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.053 13:49:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:35.053 13:49:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:20:35.053 13:49:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.053 13:49:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.053 13:49:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.053 13:49:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.053 13:49:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:35.053 13:49:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.053 13:49:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.053 13:49:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.053 13:49:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.053 13:49:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.053 13:49:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.053 13:49:37 -- paths/export.sh@5 -- # export PATH 00:20:35.053 13:49:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.053 13:49:37 -- nvmf/common.sh@47 -- # : 0 00:20:35.053 13:49:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.053 13:49:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.053 13:49:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.053 13:49:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.053 13:49:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.053 13:49:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.053 13:49:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.053 13:49:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.053 13:49:37 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:20:35.053 13:49:37 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:20:35.053 13:49:37 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:20:35.053 13:49:37 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:20:35.053 13:49:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:35.053 13:49:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.053 13:49:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:35.053 13:49:37 -- nvmf/common.sh@399 -- # 
local -g is_hw=no 00:20:35.053 13:49:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:35.053 13:49:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.053 13:49:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.053 13:49:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.053 13:49:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:35.053 13:49:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:35.053 13:49:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.053 13:49:37 -- common/autotest_common.sh@10 -- # set +x 00:20:37.602 13:49:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:37.602 13:49:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.602 13:49:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.602 13:49:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.602 13:49:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.602 13:49:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.602 13:49:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.602 13:49:39 -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.602 13:49:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.602 13:49:39 -- nvmf/common.sh@296 -- # e810=() 00:20:37.602 13:49:39 -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.602 13:49:39 -- nvmf/common.sh@297 -- # x722=() 00:20:37.602 13:49:39 -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.602 13:49:39 -- nvmf/common.sh@298 -- # mlx=() 00:20:37.602 13:49:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.602 13:49:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.602 13:49:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.602 13:49:39 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:37.602 13:49:39 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:37.602 13:49:39 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:37.602 13:49:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.602 13:49:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.602 13:49:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:20:37.602 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:20:37.602 13:49:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:20:37.602 13:49:39 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:37.602 13:49:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.602 13:49:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:20:37.602 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:20:37.602 13:49:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:37.602 13:49:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.602 13:49:39 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.602 13:49:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.602 13:49:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.602 13:49:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.602 13:49:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:20:37.602 Found net devices under 0000:81:00.0: mlx_0_0 00:20:37.602 13:49:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.602 13:49:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.602 13:49:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.602 13:49:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.602 13:49:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.602 13:49:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:20:37.602 Found net devices under 0000:81:00.1: mlx_0_1 00:20:37.602 13:49:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.602 13:49:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:37.602 13:49:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:37.602 13:49:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:37.602 13:49:39 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:37.602 13:49:39 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:37.602 13:49:39 -- nvmf/common.sh@58 -- # uname 00:20:37.602 13:49:40 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:37.602 13:49:40 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:37.602 13:49:40 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:37.602 13:49:40 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:37.602 13:49:40 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:37.602 13:49:40 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:37.602 13:49:40 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:37.602 13:49:40 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:37.602 13:49:40 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:37.602 13:49:40 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:37.602 13:49:40 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:37.602 13:49:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:37.602 13:49:40 -- 
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:37.602 13:49:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:37.602 13:49:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:37.602 13:49:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:37.602 13:49:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:37.602 13:49:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.602 13:49:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:37.602 13:49:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:37.602 13:49:40 -- nvmf/common.sh@105 -- # continue 2 00:20:37.602 13:49:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:37.602 13:49:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.602 13:49:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:37.602 13:49:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.602 13:49:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:37.602 13:49:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:37.602 13:49:40 -- nvmf/common.sh@105 -- # continue 2 00:20:37.602 13:49:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:37.602 13:49:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:37.602 13:49:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:37.602 13:49:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:37.602 13:49:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:37.602 13:49:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:37.602 13:49:40 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:37.602 13:49:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:37.602 13:49:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:37.602 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:37.602 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:20:37.602 altname enp129s0f0np0 00:20:37.603 inet 192.168.100.8/24 scope global mlx_0_0 00:20:37.603 valid_lft forever preferred_lft forever 00:20:37.603 13:49:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:37.603 13:49:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:37.603 13:49:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:37.603 13:49:40 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:37.603 13:49:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:37.603 13:49:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:37.603 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:37.603 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:20:37.603 altname enp129s0f1np1 00:20:37.603 inet 192.168.100.9/24 scope global mlx_0_1 00:20:37.603 valid_lft forever preferred_lft forever 00:20:37.603 13:49:40 -- nvmf/common.sh@411 -- # return 0 00:20:37.603 13:49:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:37.603 13:49:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:37.603 13:49:40 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:37.603 13:49:40 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:37.603 13:49:40 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:37.603 13:49:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 
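get_ip_address above derives each RDMA interface's IPv4 address by filtering the output of ip(8); the same one-liner works on its own (interface name assumed, taken from the log):

    # prints 192.168.100.8 for mlx_0_0 in this setup
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1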
00:20:37.603 13:49:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:37.603 13:49:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:37.603 13:49:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:37.603 13:49:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:37.603 13:49:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:37.603 13:49:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.603 13:49:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:37.603 13:49:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:37.603 13:49:40 -- nvmf/common.sh@105 -- # continue 2 00:20:37.603 13:49:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:37.603 13:49:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.603 13:49:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:37.603 13:49:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.603 13:49:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:37.603 13:49:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:37.603 13:49:40 -- nvmf/common.sh@105 -- # continue 2 00:20:37.603 13:49:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:37.603 13:49:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:37.603 13:49:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:37.603 13:49:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:37.603 13:49:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:37.603 13:49:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:37.603 13:49:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:37.603 13:49:40 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:37.603 192.168.100.9' 00:20:37.603 13:49:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:37.603 192.168.100.9' 00:20:37.603 13:49:40 -- nvmf/common.sh@446 -- # head -n 1 00:20:37.603 13:49:40 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:37.603 13:49:40 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:37.603 192.168.100.9' 00:20:37.603 13:49:40 -- nvmf/common.sh@447 -- # tail -n +2 00:20:37.603 13:49:40 -- nvmf/common.sh@447 -- # head -n 1 00:20:37.603 13:49:40 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:37.603 13:49:40 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:37.603 13:49:40 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:37.603 13:49:40 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:37.603 13:49:40 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:37.603 13:49:40 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:37.603 13:49:40 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:37.603 13:49:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:37.603 13:49:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.603 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:20:37.603 ************************************ 00:20:37.603 START TEST 
nvmf_target_disconnect_tc1 00:20:37.603 ************************************ 00:20:37.603 13:49:40 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:20:37.603 13:49:40 -- host/target_disconnect.sh@32 -- # set +e 00:20:37.603 13:49:40 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:37.603 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.603 [2024-04-18 13:49:40.361208] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:37.603 [2024-04-18 13:49:40.361279] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:37.603 [2024-04-18 13:49:40.361295] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7080 00:20:38.973 [2024-04-18 13:49:41.365474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:38.973 [2024-04-18 13:49:41.365529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:20:38.973 [2024-04-18 13:49:41.365549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:20:38.973 [2024-04-18 13:49:41.365602] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:38.973 [2024-04-18 13:49:41.365620] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:20:38.973 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:20:38.973 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:38.973 Initializing NVMe Controllers 00:20:38.973 13:49:41 -- host/target_disconnect.sh@33 -- # trap - ERR 00:20:38.973 13:49:41 -- host/target_disconnect.sh@33 -- # print_backtrace 00:20:38.973 13:49:41 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:20:38.973 13:49:41 -- common/autotest_common.sh@1139 -- # return 0 00:20:38.973 13:49:41 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:20:38.973 13:49:41 -- host/target_disconnect.sh@41 -- # set -e 00:20:38.973 00:20:38.973 real 0m1.136s 00:20:38.973 user 0m0.894s 00:20:38.973 sys 0m0.230s 00:20:38.973 13:49:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:38.973 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:20:38.973 ************************************ 00:20:38.973 END TEST nvmf_target_disconnect_tc1 00:20:38.973 ************************************ 00:20:38.973 13:49:41 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:38.973 13:49:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:38.973 13:49:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.973 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:20:38.973 ************************************ 00:20:38.973 START TEST nvmf_target_disconnect_tc2 00:20:38.973 ************************************ 00:20:38.973 13:49:41 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:20:38.973 13:49:41 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:20:38.973 13:49:41 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:38.973 13:49:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 
00:20:38.973 13:49:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:38.973 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:20:38.973 13:49:41 -- nvmf/common.sh@470 -- # nvmfpid=1208785 00:20:38.973 13:49:41 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:38.973 13:49:41 -- nvmf/common.sh@471 -- # waitforlisten 1208785 00:20:38.973 13:49:41 -- common/autotest_common.sh@817 -- # '[' -z 1208785 ']' 00:20:38.974 13:49:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.974 13:49:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:38.974 13:49:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.974 13:49:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:38.974 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:20:38.974 [2024-04-18 13:49:41.579817] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:38.974 [2024-04-18 13:49:41.579912] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.974 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.974 [2024-04-18 13:49:41.659876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.231 [2024-04-18 13:49:41.782747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.232 [2024-04-18 13:49:41.782802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.232 [2024-04-18 13:49:41.782818] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.232 [2024-04-18 13:49:41.782832] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.232 [2024-04-18 13:49:41.782845] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
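What "nvmfappstart -m 0xF0" amounts to, per the trace: launch nvmf_tgt on cores 4-7 (core mask 0xF0) and block until its RPC server answers on the default UNIX socket. A rough sketch of that startup and wait loop; the polling loop is an assumption about what waitforlisten does, and rpc_get_methods is used only as a cheap RPC to probe with:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # poll until the target answers on /var/tmp/spdk.sock; bail out if it died during startup
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done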
00:20:39.232 [2024-04-18 13:49:41.782947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:39.232 [2024-04-18 13:49:41.783002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:39.232 [2024-04-18 13:49:41.783053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:39.232 [2024-04-18 13:49:41.783057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:39.232 13:49:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:39.232 13:49:41 -- common/autotest_common.sh@850 -- # return 0 00:20:39.232 13:49:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:39.232 13:49:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:39.232 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:20:39.232 13:49:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.232 13:49:41 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:39.232 13:49:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.232 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:20:39.232 Malloc0 00:20:39.232 13:49:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.232 13:49:41 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:39.232 13:49:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.232 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:20:39.232 [2024-04-18 13:49:42.003807] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f5c200/0x1f67dc0) succeed. 00:20:39.232 [2024-04-18 13:49:42.016484] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f5d7f0/0x2007ec0) succeed. 
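The target is then provisioned over RPC. Assuming rpc_cmd in the trace forwards to scripts/rpc.py against the same socket, the two calls just traced correspond roughly to:

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512 B blocks
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024

The subsystem, namespace, and listener RPCs that complete the setup follow in the trace below.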
00:20:39.489 13:49:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.489 13:49:42 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.489 13:49:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.489 13:49:42 -- common/autotest_common.sh@10 -- # set +x 00:20:39.489 13:49:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.489 13:49:42 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.489 13:49:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.489 13:49:42 -- common/autotest_common.sh@10 -- # set +x 00:20:39.489 13:49:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.489 13:49:42 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:39.489 13:49:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.489 13:49:42 -- common/autotest_common.sh@10 -- # set +x 00:20:39.489 [2024-04-18 13:49:42.200656] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:39.489 13:49:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.489 13:49:42 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:39.489 13:49:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.489 13:49:42 -- common/autotest_common.sh@10 -- # set +x 00:20:39.489 13:49:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.489 13:49:42 -- host/target_disconnect.sh@50 -- # reconnectpid=1208818 00:20:39.489 13:49:42 -- host/target_disconnect.sh@52 -- # sleep 2 00:20:39.489 13:49:42 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:39.489 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.013 13:49:44 -- host/target_disconnect.sh@53 -- # kill -9 1208785 00:20:42.013 13:49:44 -- host/target_disconnect.sh@55 -- # sleep 2 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with 
error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Write completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.945 starting I/O failed 00:20:42.945 Read completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Write completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Read completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Write completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Read completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Read completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Read completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Write completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Write completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 Write completed with error (sct=0, sc=8) 00:20:42.946 starting I/O failed 00:20:42.946 [2024-04-18 13:49:45.405352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:43.510 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1208785 Killed "${NVMF_APP[@]}" "$@" 00:20:43.510 13:49:46 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:20:43.510 13:49:46 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:43.510 13:49:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:43.510 13:49:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:43.510 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:43.510 13:49:46 -- nvmf/common.sh@470 -- # nvmfpid=1209343 00:20:43.510 13:49:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:43.510 13:49:46 -- nvmf/common.sh@471 -- # waitforlisten 1209343 00:20:43.510 13:49:46 -- common/autotest_common.sh@817 -- # '[' -z 1209343 ']' 00:20:43.510 13:49:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.510 13:49:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.510 13:49:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.510 13:49:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.510 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:43.510 [2024-04-18 13:49:46.265845] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:20:43.510 [2024-04-18 13:49:46.265935] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.510 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.768 [2024-04-18 13:49:46.347973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.768 Write completed with error (sct=0, sc=8) 00:20:43.768 starting I/O failed 00:20:43.768 Write completed with error (sct=0, sc=8) 00:20:43.768 starting I/O failed 00:20:43.768 Write completed with error (sct=0, sc=8) 00:20:43.768 starting I/O failed 00:20:43.768 Write completed with error (sct=0, sc=8) 00:20:43.768 starting I/O failed 00:20:43.768 Read completed with error (sct=0, sc=8) 00:20:43.768 starting I/O failed 00:20:43.768 Write completed with error (sct=0, sc=8) 00:20:43.768 starting I/O failed 00:20:43.768 Write completed with error (sct=0, sc=8) 00:20:43.768 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Read completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 Write completed with error (sct=0, sc=8) 00:20:43.769 starting I/O failed 00:20:43.769 [2024-04-18 13:49:46.410759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:43.769 [2024-04-18 13:49:46.412683] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel 
(status = 8) 00:20:43.769 [2024-04-18 13:49:46.412717] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:43.769 [2024-04-18 13:49:46.412732] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:43.769 [2024-04-18 13:49:46.470622] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.769 [2024-04-18 13:49:46.470688] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.769 [2024-04-18 13:49:46.470705] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.769 [2024-04-18 13:49:46.470718] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.769 [2024-04-18 13:49:46.470730] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.769 [2024-04-18 13:49:46.470817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:43.769 [2024-04-18 13:49:46.470873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:43.769 [2024-04-18 13:49:46.470925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:43.769 [2024-04-18 13:49:46.470929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:44.026 13:49:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.026 13:49:46 -- common/autotest_common.sh@850 -- # return 0 00:20:44.026 13:49:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:44.026 13:49:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:44.026 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:44.026 13:49:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.026 13:49:46 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:44.026 13:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.026 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:44.026 Malloc0 00:20:44.026 13:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.026 13:49:46 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:44.027 13:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.027 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:44.027 [2024-04-18 13:49:46.680030] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21fb200/0x2206dc0) succeed. 00:20:44.027 [2024-04-18 13:49:46.692499] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21fc7f0/0x22a6ec0) succeed. 
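With the first target killed and a second one (pid 1209343) brought up in its place, the shape of the tc2 scenario is: start I/O through the reconnect example, yank the target out from under it, re-provision a fresh target at the same address, and watch the example try to re-establish its queue pairs. A loose paraphrase of those steps, using the pids and arguments from this run:

    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!                  # 1208818 in this log
    sleep 2
    kill -9 "$nvmfpid"               # 1208785: drop the first target while I/O is in flight
    disconnect_init 192.168.100.8    # restart and re-provision the target (pid 1209343 above)
    wait "$reconnectpid"             # the example keeps retrying; see the connect errors below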
00:20:44.284 13:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.284 13:49:46 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.284 13:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.284 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:44.284 13:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.284 13:49:46 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:44.284 13:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.284 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:44.284 13:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.284 13:49:46 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:44.284 13:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.284 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:44.284 [2024-04-18 13:49:46.880741] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:44.284 13:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.284 13:49:46 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:44.284 13:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.284 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:20:44.284 13:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.284 13:49:46 -- host/target_disconnect.sh@58 -- # wait 1208818 00:20:44.849 [2024-04-18 13:49:47.416909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.849 qpair failed and we were unable to recover it. 00:20:44.849 [2024-04-18 13:49:47.430739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.849 [2024-04-18 13:49:47.430826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.849 [2024-04-18 13:49:47.430863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.849 [2024-04-18 13:49:47.430881] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.849 [2024-04-18 13:49:47.430895] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.849 [2024-04-18 13:49:47.440695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 
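The repeated failures that follow all have the same shape: the reconnect example tries to re-attach an I/O queue pair for controller ID 0x1, the freshly restarted target has no such controller, and the Fabrics Connect command comes back with sct 1, sc 130 (0x82). Reading those codes against the NVMe base and NVMe-oF specs, the earlier I/O errors (sct=0, sc=8) mean "command aborted due to SQ deletion" and sct 1 / sc 130 on Connect means "connect invalid parameters". A small illustrative decoder covering just the two combinations seen in this log:

    decode_status() {   # args: SCT SC (decimal, as printed in the log)
        case "$1/$2" in
            0/8)   echo 'generic status: command aborted due to SQ deletion' ;;
            1/130) echo 'command specific: Fabrics Connect - invalid parameters (0x82)' ;;
            *)     echo "sct=$1 sc=$2: not one of the codes seen in this log" ;;
        esac
    }
    decode_status 1 130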
00:20:44.850 [2024-04-18 13:49:47.450658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.450722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.450754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.450770] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.450783] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.460778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 00:20:44.850 [2024-04-18 13:49:47.470488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.470548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.470581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.470597] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.470611] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.480871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 00:20:44.850 [2024-04-18 13:49:47.490422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.490491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.490520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.490536] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.490550] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.500703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 
00:20:44.850 [2024-04-18 13:49:47.510585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.510647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.510679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.510696] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.510716] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.520920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 00:20:44.850 [2024-04-18 13:49:47.530762] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.530825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.530854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.530870] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.530884] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.540966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 00:20:44.850 [2024-04-18 13:49:47.550610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.550671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.550700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.550716] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.550730] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.561008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 
00:20:44.850 [2024-04-18 13:49:47.570622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.570690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.570718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.570734] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.570748] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.580969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 00:20:44.850 [2024-04-18 13:49:47.590729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.590801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.590834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.590851] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.590864] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.601114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 00:20:44.850 [2024-04-18 13:49:47.610947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.611018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.611048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.611064] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.611076] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.621183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 
00:20:44.850 [2024-04-18 13:49:47.630742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.630801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.630835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.630852] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.630865] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:44.850 [2024-04-18 13:49:47.641228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.850 qpair failed and we were unable to recover it. 00:20:44.850 [2024-04-18 13:49:47.651091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.850 [2024-04-18 13:49:47.651165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.850 [2024-04-18 13:49:47.651195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.850 [2024-04-18 13:49:47.651211] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.850 [2024-04-18 13:49:47.651225] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.109 [2024-04-18 13:49:47.661243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.109 qpair failed and we were unable to recover it. 00:20:45.109 [2024-04-18 13:49:47.671156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.109 [2024-04-18 13:49:47.671230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.109 [2024-04-18 13:49:47.671264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.109 [2024-04-18 13:49:47.671280] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.109 [2024-04-18 13:49:47.671293] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.681527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 
00:20:45.110 [2024-04-18 13:49:47.690862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.690925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.690965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.690991] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.691005] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.701280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.710828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.710893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.710926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.710952] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.710969] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.721230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.731006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.731081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.731114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.731130] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.731144] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.741653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 
00:20:45.110 [2024-04-18 13:49:47.751043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.751110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.751139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.751155] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.751169] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.761562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.771230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.771299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.771329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.771344] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.771358] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.781536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.791185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.791248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.791278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.791295] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.791308] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.801680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 
00:20:45.110 [2024-04-18 13:49:47.811326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.811393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.811426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.811442] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.811456] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.821746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.831293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.831365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.831394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.831410] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.831424] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.841685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.851381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.851447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.851477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.851493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.851506] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.861707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 
00:20:45.110 [2024-04-18 13:49:47.871405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.871467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.871505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.871522] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.871536] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.881906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.891540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.891608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.891640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.891657] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.110 [2024-04-18 13:49:47.891671] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.110 [2024-04-18 13:49:47.901795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.110 qpair failed and we were unable to recover it. 00:20:45.110 [2024-04-18 13:49:47.911567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.110 [2024-04-18 13:49:47.911629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.110 [2024-04-18 13:49:47.911662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.110 [2024-04-18 13:49:47.911678] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.111 [2024-04-18 13:49:47.911692] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:47.921858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 
00:20:45.369 [2024-04-18 13:49:47.931613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:47.931677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:47.931709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.369 [2024-04-18 13:49:47.931724] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.369 [2024-04-18 13:49:47.931738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:47.942122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 00:20:45.369 [2024-04-18 13:49:47.951635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:47.951695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:47.951725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.369 [2024-04-18 13:49:47.951741] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.369 [2024-04-18 13:49:47.951760] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:47.962120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 00:20:45.369 [2024-04-18 13:49:47.971765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:47.971834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:47.971863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.369 [2024-04-18 13:49:47.971879] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.369 [2024-04-18 13:49:47.971892] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:47.982088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 
00:20:45.369 [2024-04-18 13:49:47.991796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:47.991869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:47.991902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.369 [2024-04-18 13:49:47.991919] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.369 [2024-04-18 13:49:47.991933] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:48.002338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 00:20:45.369 [2024-04-18 13:49:48.011829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:48.011894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:48.011924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.369 [2024-04-18 13:49:48.011951] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.369 [2024-04-18 13:49:48.011968] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:48.022268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 00:20:45.369 [2024-04-18 13:49:48.031811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:48.031873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:48.031907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.369 [2024-04-18 13:49:48.031923] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.369 [2024-04-18 13:49:48.031947] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:48.042305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 
00:20:45.369 [2024-04-18 13:49:48.052002] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:48.052081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:48.052115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.369 [2024-04-18 13:49:48.052133] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.369 [2024-04-18 13:49:48.052146] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.369 [2024-04-18 13:49:48.062398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.369 qpair failed and we were unable to recover it. 00:20:45.369 [2024-04-18 13:49:48.072156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.369 [2024-04-18 13:49:48.072228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.369 [2024-04-18 13:49:48.072258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.370 [2024-04-18 13:49:48.072275] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.370 [2024-04-18 13:49:48.072288] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.370 [2024-04-18 13:49:48.082320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.370 qpair failed and we were unable to recover it. 00:20:45.370 [2024-04-18 13:49:48.092011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.370 [2024-04-18 13:49:48.092078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.370 [2024-04-18 13:49:48.092112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.370 [2024-04-18 13:49:48.092128] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.370 [2024-04-18 13:49:48.092142] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.370 [2024-04-18 13:49:48.102493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.370 qpair failed and we were unable to recover it. 
00:20:45.370 [2024-04-18 13:49:48.112776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.370 [2024-04-18 13:49:48.112833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.370 [2024-04-18 13:49:48.112863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.370 [2024-04-18 13:49:48.112879] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.370 [2024-04-18 13:49:48.112893] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.370 [2024-04-18 13:49:48.122614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.370 qpair failed and we were unable to recover it. 00:20:45.370 [2024-04-18 13:49:48.132225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.370 [2024-04-18 13:49:48.132296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.370 [2024-04-18 13:49:48.132326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.370 [2024-04-18 13:49:48.132348] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.370 [2024-04-18 13:49:48.132362] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.370 [2024-04-18 13:49:48.142552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.370 qpair failed and we were unable to recover it. 00:20:45.370 [2024-04-18 13:49:48.152254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.370 [2024-04-18 13:49:48.152322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.370 [2024-04-18 13:49:48.152352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.370 [2024-04-18 13:49:48.152368] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.370 [2024-04-18 13:49:48.152382] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.370 [2024-04-18 13:49:48.162619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.370 qpair failed and we were unable to recover it. 
00:20:45.628 [2024-04-18 13:49:48.172350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.628 [2024-04-18 13:49:48.172412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.628 [2024-04-18 13:49:48.172442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.628 [2024-04-18 13:49:48.172457] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.628 [2024-04-18 13:49:48.172471] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.628 [2024-04-18 13:49:48.182708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.628 qpair failed and we were unable to recover it. 00:20:45.628 [2024-04-18 13:49:48.192470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.628 [2024-04-18 13:49:48.192533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.628 [2024-04-18 13:49:48.192565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.628 [2024-04-18 13:49:48.192581] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.628 [2024-04-18 13:49:48.192595] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.628 [2024-04-18 13:49:48.202794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.628 qpair failed and we were unable to recover it. 00:20:45.628 [2024-04-18 13:49:48.212411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.628 [2024-04-18 13:49:48.212477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.628 [2024-04-18 13:49:48.212509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.628 [2024-04-18 13:49:48.212525] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.628 [2024-04-18 13:49:48.212539] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.628 [2024-04-18 13:49:48.222895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.628 qpair failed and we were unable to recover it. 
00:20:45.628 [2024-04-18 13:49:48.232509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.232581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.232614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.232630] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.232644] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.242727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 00:20:45.629 [2024-04-18 13:49:48.252526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.252590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.252622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.252638] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.252651] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.262778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 00:20:45.629 [2024-04-18 13:49:48.272578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.272641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.272672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.272688] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.272703] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.283181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 
00:20:45.629 [2024-04-18 13:49:48.292536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.292603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.292633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.292649] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.292662] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.303263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 00:20:45.629 [2024-04-18 13:49:48.312982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.313052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.313091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.313108] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.313123] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.322925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 00:20:45.629 [2024-04-18 13:49:48.332722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.332788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.332818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.332833] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.332847] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.343240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 
00:20:45.629 [2024-04-18 13:49:48.352879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.352950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.352984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.353000] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.353014] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.363289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 00:20:45.629 [2024-04-18 13:49:48.372902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.372986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.373019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.373035] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.373049] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.383639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 00:20:45.629 [2024-04-18 13:49:48.393068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.393137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.393167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.393183] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.393202] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.403373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 
00:20:45.629 [2024-04-18 13:49:48.413141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.629 [2024-04-18 13:49:48.413200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.629 [2024-04-18 13:49:48.413230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.629 [2024-04-18 13:49:48.413245] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.629 [2024-04-18 13:49:48.413258] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.629 [2024-04-18 13:49:48.423532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.629 qpair failed and we were unable to recover it. 00:20:45.888 [2024-04-18 13:49:48.433136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.888 [2024-04-18 13:49:48.433200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.888 [2024-04-18 13:49:48.433229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.888 [2024-04-18 13:49:48.433245] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.888 [2024-04-18 13:49:48.433258] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.443445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.453112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.453185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.453213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.453228] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.453241] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.463617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 
00:20:45.889 [2024-04-18 13:49:48.473233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.473306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.473336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.473351] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.473365] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.483704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.493342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.493409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.493442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.493458] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.493472] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.503746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.513508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.513564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.513596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.513613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.513626] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.523868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 
00:20:45.889 [2024-04-18 13:49:48.533512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.533580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.533611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.533628] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.533641] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.543974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.553721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.553794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.553827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.553843] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.553857] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.564045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.573631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.573695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.573724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.573746] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.573760] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.584121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 
00:20:45.889 [2024-04-18 13:49:48.593687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.593748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.593781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.593798] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.593811] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.604055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.613812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.613879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.613909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.613925] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.613948] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.624262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.634014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.634089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.634122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.634139] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.634153] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.644284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 
00:20:45.889 [2024-04-18 13:49:48.653968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.654030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.654063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.654080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.654093] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.664362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:45.889 [2024-04-18 13:49:48.674171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:45.889 [2024-04-18 13:49:48.674233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:45.889 [2024-04-18 13:49:48.674263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:45.889 [2024-04-18 13:49:48.674278] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:45.889 [2024-04-18 13:49:48.674292] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:45.889 [2024-04-18 13:49:48.684301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:45.889 qpair failed and we were unable to recover it. 00:20:46.148 [2024-04-18 13:49:48.694120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.694193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.694226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.694242] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.694255] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.704422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 
00:20:46.148 [2024-04-18 13:49:48.714097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.714175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.714205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.714221] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.714234] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.724529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 00:20:46.148 [2024-04-18 13:49:48.734141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.734212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.734242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.734258] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.734272] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.744728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 00:20:46.148 [2024-04-18 13:49:48.754263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.754329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.754367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.754384] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.754397] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.764570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 
00:20:46.148 [2024-04-18 13:49:48.774387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.774460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.774490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.774506] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.774519] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.784748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 00:20:46.148 [2024-04-18 13:49:48.794336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.794409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.794441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.794457] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.794472] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.804716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 00:20:46.148 [2024-04-18 13:49:48.814337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.814401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.814434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.814450] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.814463] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.824817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 
00:20:46.148 [2024-04-18 13:49:48.834470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.834530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.834562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.834579] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.834599] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.844814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 00:20:46.148 [2024-04-18 13:49:48.854531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.854600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.854632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.854648] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.854662] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.864844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.148 qpair failed and we were unable to recover it. 00:20:46.148 [2024-04-18 13:49:48.874622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.148 [2024-04-18 13:49:48.874690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.148 [2024-04-18 13:49:48.874722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.148 [2024-04-18 13:49:48.874739] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.148 [2024-04-18 13:49:48.874753] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.148 [2024-04-18 13:49:48.884895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.149 qpair failed and we were unable to recover it. 
00:20:46.149 [2024-04-18 13:49:48.894755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.149 [2024-04-18 13:49:48.894823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.149 [2024-04-18 13:49:48.894855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.149 [2024-04-18 13:49:48.894871] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.149 [2024-04-18 13:49:48.894884] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.149 [2024-04-18 13:49:48.905046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.149 qpair failed and we were unable to recover it. 00:20:46.149 [2024-04-18 13:49:48.914708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.149 [2024-04-18 13:49:48.914771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.149 [2024-04-18 13:49:48.914799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.149 [2024-04-18 13:49:48.914815] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.149 [2024-04-18 13:49:48.914828] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.149 [2024-04-18 13:49:48.924880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.149 qpair failed and we were unable to recover it. 00:20:46.149 [2024-04-18 13:49:48.934891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.149 [2024-04-18 13:49:48.934975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.149 [2024-04-18 13:49:48.935006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.149 [2024-04-18 13:49:48.935021] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.149 [2024-04-18 13:49:48.935035] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.149 [2024-04-18 13:49:48.945202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.149 qpair failed and we were unable to recover it. 
00:20:46.407 [2024-04-18 13:49:48.954896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:48.954990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:48.955023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:48.955040] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:48.955053] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:48.965218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 00:20:46.407 [2024-04-18 13:49:48.975014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:48.975075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:48.975108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:48.975125] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:48.975139] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:48.985592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 00:20:46.407 [2024-04-18 13:49:48.995132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:48.995196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:48.995226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:48.995242] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:48.995255] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:49.005581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 
00:20:46.407 [2024-04-18 13:49:49.015133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:49.015206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:49.015235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:49.015259] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:49.015273] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:49.025447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 00:20:46.407 [2024-04-18 13:49:49.034977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:49.035050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:49.035080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:49.035096] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:49.035110] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:49.045354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 00:20:46.407 [2024-04-18 13:49:49.055173] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:49.055239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:49.055268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:49.055284] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:49.055298] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:49.065604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 
00:20:46.407 [2024-04-18 13:49:49.075279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:49.075343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:49.075374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:49.075390] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:49.075403] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:49.085495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 00:20:46.407 [2024-04-18 13:49:49.095358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:49.095431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:49.095462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:49.095477] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:49.095491] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:49.105601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 00:20:46.407 [2024-04-18 13:49:49.115402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:49.115475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:49.115505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:49.115520] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:49.115534] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.407 [2024-04-18 13:49:49.125498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.407 qpair failed and we were unable to recover it. 
00:20:46.407 [2024-04-18 13:49:49.135268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.407 [2024-04-18 13:49:49.135333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.407 [2024-04-18 13:49:49.135363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.407 [2024-04-18 13:49:49.135379] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.407 [2024-04-18 13:49:49.135392] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.408 [2024-04-18 13:49:49.145755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.408 qpair failed and we were unable to recover it. 00:20:46.408 [2024-04-18 13:49:49.155284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.408 [2024-04-18 13:49:49.155346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.408 [2024-04-18 13:49:49.155378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.408 [2024-04-18 13:49:49.155394] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.408 [2024-04-18 13:49:49.155407] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.408 [2024-04-18 13:49:49.165736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.408 qpair failed and we were unable to recover it. 00:20:46.408 [2024-04-18 13:49:49.175436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.408 [2024-04-18 13:49:49.175504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.408 [2024-04-18 13:49:49.175536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.408 [2024-04-18 13:49:49.175553] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.408 [2024-04-18 13:49:49.175567] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.408 [2024-04-18 13:49:49.185764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.408 qpair failed and we were unable to recover it. 
00:20:46.408 [2024-04-18 13:49:49.195526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.408 [2024-04-18 13:49:49.195598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.408 [2024-04-18 13:49:49.195636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.408 [2024-04-18 13:49:49.195653] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.408 [2024-04-18 13:49:49.195666] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.408 [2024-04-18 13:49:49.205842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.408 qpair failed and we were unable to recover it. 00:20:46.666 [2024-04-18 13:49:49.215521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.215582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.215614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.215630] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.215644] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.225878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 00:20:46.666 [2024-04-18 13:49:49.235456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.235520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.235552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.235568] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.235582] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.245778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 
00:20:46.666 [2024-04-18 13:49:49.255565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.255633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.255666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.255682] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.255696] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.265866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 00:20:46.666 [2024-04-18 13:49:49.275698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.275769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.275801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.275818] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.275841] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.285960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 00:20:46.666 [2024-04-18 13:49:49.295729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.295794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.295823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.295838] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.295851] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.306175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 
00:20:46.666 [2024-04-18 13:49:49.315763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.315826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.315856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.315872] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.315885] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.326127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 00:20:46.666 [2024-04-18 13:49:49.335742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.335816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.335845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.335861] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.335874] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.346249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 00:20:46.666 [2024-04-18 13:49:49.355910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.355991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.356025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.356042] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.666 [2024-04-18 13:49:49.356055] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.666 [2024-04-18 13:49:49.366308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.666 qpair failed and we were unable to recover it. 
00:20:46.666 [2024-04-18 13:49:49.376138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.666 [2024-04-18 13:49:49.376207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.666 [2024-04-18 13:49:49.376237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.666 [2024-04-18 13:49:49.376253] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.667 [2024-04-18 13:49:49.376266] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.667 [2024-04-18 13:49:49.386559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.667 qpair failed and we were unable to recover it. 00:20:46.667 [2024-04-18 13:49:49.396149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.667 [2024-04-18 13:49:49.396212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.667 [2024-04-18 13:49:49.396242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.667 [2024-04-18 13:49:49.396258] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.667 [2024-04-18 13:49:49.396271] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.667 [2024-04-18 13:49:49.406708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.667 qpair failed and we were unable to recover it. 00:20:46.667 [2024-04-18 13:49:49.416338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.667 [2024-04-18 13:49:49.416410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.667 [2024-04-18 13:49:49.416443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.667 [2024-04-18 13:49:49.416459] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.667 [2024-04-18 13:49:49.416472] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.667 [2024-04-18 13:49:49.426561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.667 qpair failed and we were unable to recover it. 
00:20:46.667 [2024-04-18 13:49:49.436347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.667 [2024-04-18 13:49:49.436422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.667 [2024-04-18 13:49:49.436452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.667 [2024-04-18 13:49:49.436468] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.667 [2024-04-18 13:49:49.436481] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.667 [2024-04-18 13:49:49.446816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.667 qpair failed and we were unable to recover it. 00:20:46.667 [2024-04-18 13:49:49.456422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.667 [2024-04-18 13:49:49.456487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.667 [2024-04-18 13:49:49.456521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.667 [2024-04-18 13:49:49.456543] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.667 [2024-04-18 13:49:49.456558] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.667 [2024-04-18 13:49:49.466691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.667 qpair failed and we were unable to recover it. 00:20:46.925 [2024-04-18 13:49:49.476390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.476452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.476483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.476499] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.476513] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.486604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 
00:20:46.925 [2024-04-18 13:49:49.496358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.496427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.496456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.496472] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.496486] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.506732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 00:20:46.925 [2024-04-18 13:49:49.516426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.516492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.516525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.516540] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.516554] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.526740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 00:20:46.925 [2024-04-18 13:49:49.536400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.536457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.536489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.536506] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.536520] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.546851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 
00:20:46.925 [2024-04-18 13:49:49.556444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.556504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.556536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.556553] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.556567] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.566828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 00:20:46.925 [2024-04-18 13:49:49.576623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.576691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.576724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.576740] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.576753] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.586992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 00:20:46.925 [2024-04-18 13:49:49.596628] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.596698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.596727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.596743] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.596757] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.607045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 
00:20:46.925 [2024-04-18 13:49:49.616724] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.616789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.925 [2024-04-18 13:49:49.616819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.925 [2024-04-18 13:49:49.616834] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.925 [2024-04-18 13:49:49.616847] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.925 [2024-04-18 13:49:49.626991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.925 qpair failed and we were unable to recover it. 00:20:46.925 [2024-04-18 13:49:49.636679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.925 [2024-04-18 13:49:49.636738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.926 [2024-04-18 13:49:49.636772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.926 [2024-04-18 13:49:49.636788] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.926 [2024-04-18 13:49:49.636802] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.926 [2024-04-18 13:49:49.647163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.926 qpair failed and we were unable to recover it. 00:20:46.926 [2024-04-18 13:49:49.656840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.926 [2024-04-18 13:49:49.656912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.926 [2024-04-18 13:49:49.656954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.926 [2024-04-18 13:49:49.656972] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.926 [2024-04-18 13:49:49.656986] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.926 [2024-04-18 13:49:49.667245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.926 qpair failed and we were unable to recover it. 
00:20:46.926 [2024-04-18 13:49:49.676874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.926 [2024-04-18 13:49:49.676953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.926 [2024-04-18 13:49:49.676986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.926 [2024-04-18 13:49:49.677002] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.926 [2024-04-18 13:49:49.677016] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.926 [2024-04-18 13:49:49.687153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.926 qpair failed and we were unable to recover it. 00:20:46.926 [2024-04-18 13:49:49.696974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.926 [2024-04-18 13:49:49.697040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.926 [2024-04-18 13:49:49.697073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.926 [2024-04-18 13:49:49.697090] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.926 [2024-04-18 13:49:49.697103] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.926 [2024-04-18 13:49:49.707263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.926 qpair failed and we were unable to recover it. 00:20:46.926 [2024-04-18 13:49:49.717050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:46.926 [2024-04-18 13:49:49.717109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:46.926 [2024-04-18 13:49:49.717142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:46.926 [2024-04-18 13:49:49.717158] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:46.926 [2024-04-18 13:49:49.717178] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:46.926 [2024-04-18 13:49:49.727401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.926 qpair failed and we were unable to recover it. 
00:20:47.184 [2024-04-18 13:49:49.737049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.184 [2024-04-18 13:49:49.737121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.184 [2024-04-18 13:49:49.737151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.184 [2024-04-18 13:49:49.737166] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.184 [2024-04-18 13:49:49.737180] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.184 [2024-04-18 13:49:49.747544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.184 qpair failed and we were unable to recover it. 00:20:47.184 [2024-04-18 13:49:49.757149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.184 [2024-04-18 13:49:49.757222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.184 [2024-04-18 13:49:49.757252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.184 [2024-04-18 13:49:49.757268] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.184 [2024-04-18 13:49:49.757281] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.184 [2024-04-18 13:49:49.767507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.184 qpair failed and we were unable to recover it. 00:20:47.184 [2024-04-18 13:49:49.777209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.184 [2024-04-18 13:49:49.777272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.184 [2024-04-18 13:49:49.777302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.184 [2024-04-18 13:49:49.777318] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.184 [2024-04-18 13:49:49.777332] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.787494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 
00:20:47.185 [2024-04-18 13:49:49.797187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.797246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.797276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.797292] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.797305] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.807615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 00:20:47.185 [2024-04-18 13:49:49.817249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.817325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.817355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.817371] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.817384] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.827693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 00:20:47.185 [2024-04-18 13:49:49.837277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.837341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.837373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.837390] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.837404] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.847836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 
00:20:47.185 [2024-04-18 13:49:49.857398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.857457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.857489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.857505] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.857519] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.867829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 00:20:47.185 [2024-04-18 13:49:49.877430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.877486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.877518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.877535] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.877548] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.887945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 00:20:47.185 [2024-04-18 13:49:49.897495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.897560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.897592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.897615] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.897629] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.907879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 
00:20:47.185 [2024-04-18 13:49:49.917528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.917602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.917633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.917650] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.917663] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.927983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 00:20:47.185 [2024-04-18 13:49:49.937575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.937644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.937674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.937689] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.937703] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.948131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 00:20:47.185 [2024-04-18 13:49:49.957688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.957751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.957780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.957795] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.957809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.185 [2024-04-18 13:49:49.968276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.185 qpair failed and we were unable to recover it. 
00:20:47.185 [2024-04-18 13:49:49.977834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.185 [2024-04-18 13:49:49.977906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.185 [2024-04-18 13:49:49.977947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.185 [2024-04-18 13:49:49.977966] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.185 [2024-04-18 13:49:49.977980] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:49.988213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:49.997775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:49.997845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:49.997879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:49.997896] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:49.997909] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.008274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.017852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.017930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.017968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.017986] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.017999] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.028423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 
00:20:47.444 [2024-04-18 13:49:50.037900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.037983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.038013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.038029] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.038043] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.048407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.057975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.058046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.058079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.058097] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.058111] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.068680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.078105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.078178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.078219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.078237] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.078252] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.088430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 
00:20:47.444 [2024-04-18 13:49:50.098136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.098208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.098241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.098258] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.098271] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.108793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.118103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.118166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.118198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.118214] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.118228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.128659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.138197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.138268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.138297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.138313] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.138326] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.148608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 
00:20:47.444 [2024-04-18 13:49:50.158341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.158406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.158437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.158453] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.158472] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.168850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.178408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.178476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.178508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.178524] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.178537] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.188658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.198500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.198562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.198592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.198608] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.198622] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.208921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 
00:20:47.444 [2024-04-18 13:49:50.218529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.218598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.218629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.218645] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.444 [2024-04-18 13:49:50.218659] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.444 [2024-04-18 13:49:50.228856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.444 qpair failed and we were unable to recover it. 00:20:47.444 [2024-04-18 13:49:50.238615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.444 [2024-04-18 13:49:50.238687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.444 [2024-04-18 13:49:50.238717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.444 [2024-04-18 13:49:50.238733] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.445 [2024-04-18 13:49:50.238747] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.702 [2024-04-18 13:49:50.248921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 00:20:47.703 [2024-04-18 13:49:50.258599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.258666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.258698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.258715] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.258728] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.268852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 
00:20:47.703 [2024-04-18 13:49:50.278649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.278709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.278741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.278758] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.278771] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.289164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 00:20:47.703 [2024-04-18 13:49:50.298821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.298896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.298926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.298951] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.298967] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.308936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 00:20:47.703 [2024-04-18 13:49:50.318898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.318984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.319021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.319038] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.319051] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.329297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 
00:20:47.703 [2024-04-18 13:49:50.338987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.339052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.339086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.339109] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.339123] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.349311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 00:20:47.703 [2024-04-18 13:49:50.359017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.359082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.359115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.359132] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.359145] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.369299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 00:20:47.703 [2024-04-18 13:49:50.379141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.379215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.379248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.379265] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.379278] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.389390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 
00:20:47.703 [2024-04-18 13:49:50.399345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.399420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.399453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.399469] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.399483] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.409558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 00:20:47.703 [2024-04-18 13:49:50.419082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.419142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.419172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.703 [2024-04-18 13:49:50.419188] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.703 [2024-04-18 13:49:50.419201] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.703 [2024-04-18 13:49:50.429170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.703 qpair failed and we were unable to recover it. 00:20:47.703 [2024-04-18 13:49:50.439294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.703 [2024-04-18 13:49:50.439369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.703 [2024-04-18 13:49:50.439399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.704 [2024-04-18 13:49:50.439415] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.704 [2024-04-18 13:49:50.439429] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.704 [2024-04-18 13:49:50.449684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.704 qpair failed and we were unable to recover it. 
00:20:47.704 [2024-04-18 13:49:50.459400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.704 [2024-04-18 13:49:50.459473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.704 [2024-04-18 13:49:50.459502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.704 [2024-04-18 13:49:50.459518] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.704 [2024-04-18 13:49:50.459533] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.704 [2024-04-18 13:49:50.469495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.704 qpair failed and we were unable to recover it. 00:20:47.704 [2024-04-18 13:49:50.479334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.704 [2024-04-18 13:49:50.479403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.704 [2024-04-18 13:49:50.479433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.704 [2024-04-18 13:49:50.479450] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.704 [2024-04-18 13:49:50.479463] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.704 [2024-04-18 13:49:50.489765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.704 qpair failed and we were unable to recover it. 00:20:47.704 [2024-04-18 13:49:50.499394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.704 [2024-04-18 13:49:50.499454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.704 [2024-04-18 13:49:50.499486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.704 [2024-04-18 13:49:50.499502] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.704 [2024-04-18 13:49:50.499516] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.964 [2024-04-18 13:49:50.509898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.964 qpair failed and we were unable to recover it. 
00:20:47.964 [2024-04-18 13:49:50.519405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.964 [2024-04-18 13:49:50.519467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.964 [2024-04-18 13:49:50.519505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.964 [2024-04-18 13:49:50.519523] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.964 [2024-04-18 13:49:50.519537] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.964 [2024-04-18 13:49:50.529814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.964 qpair failed and we were unable to recover it. 00:20:47.964 [2024-04-18 13:49:50.539457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.539524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.539555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.539572] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.539586] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.549801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 00:20:47.965 [2024-04-18 13:49:50.559616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.559689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.559721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.559737] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.559750] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.569769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 
00:20:47.965 [2024-04-18 13:49:50.579645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.579706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.579738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.579754] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.579767] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.589900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 00:20:47.965 [2024-04-18 13:49:50.599795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.599852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.599885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.599901] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.599920] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.610074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 00:20:47.965 [2024-04-18 13:49:50.619906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.619982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.620011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.620026] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.620040] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.630269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 
00:20:47.965 [2024-04-18 13:49:50.640005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.640074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.640107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.640123] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.640136] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.652415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 00:20:47.965 [2024-04-18 13:49:50.660141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.660208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.660243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.660259] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.660273] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.670191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 00:20:47.965 [2024-04-18 13:49:50.679996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.680058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.680091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.680107] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.680120] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.690313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 
00:20:47.965 [2024-04-18 13:49:50.700209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.700288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.700321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.700337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.700351] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.710569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 00:20:47.965 [2024-04-18 13:49:50.720208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.720280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.720313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.720328] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.720342] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.730339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 00:20:47.965 [2024-04-18 13:49:50.740406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.740470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.740503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.740520] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.740534] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:47.965 [2024-04-18 13:49:50.750804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:47.965 qpair failed and we were unable to recover it. 
00:20:47.965 [2024-04-18 13:49:50.760074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:47.965 [2024-04-18 13:49:50.760142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:47.965 [2024-04-18 13:49:50.760174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:47.965 [2024-04-18 13:49:50.760190] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:47.965 [2024-04-18 13:49:50.760204] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.224 [2024-04-18 13:49:50.770488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.224 qpair failed and we were unable to recover it. 00:20:48.224 [2024-04-18 13:49:50.780279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.224 [2024-04-18 13:49:50.780351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.224 [2024-04-18 13:49:50.780381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.224 [2024-04-18 13:49:50.780404] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.224 [2024-04-18 13:49:50.780418] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.224 [2024-04-18 13:49:50.790736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.224 qpair failed and we were unable to recover it. 00:20:48.224 [2024-04-18 13:49:50.800412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.224 [2024-04-18 13:49:50.800480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.224 [2024-04-18 13:49:50.800512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.224 [2024-04-18 13:49:50.800529] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.224 [2024-04-18 13:49:50.800542] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.224 [2024-04-18 13:49:50.810814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.224 qpair failed and we were unable to recover it. 
00:20:48.224 [2024-04-18 13:49:50.820466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.224 [2024-04-18 13:49:50.820525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.224 [2024-04-18 13:49:50.820557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.224 [2024-04-18 13:49:50.820574] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.224 [2024-04-18 13:49:50.820587] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.224 [2024-04-18 13:49:50.830727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.224 qpair failed and we were unable to recover it. 00:20:48.224 [2024-04-18 13:49:50.840397] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.224 [2024-04-18 13:49:50.840461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.224 [2024-04-18 13:49:50.840494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.224 [2024-04-18 13:49:50.840510] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.224 [2024-04-18 13:49:50.840523] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.224 [2024-04-18 13:49:50.850931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.224 qpair failed and we were unable to recover it. 00:20:48.224 [2024-04-18 13:49:50.860486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.224 [2024-04-18 13:49:50.860554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.224 [2024-04-18 13:49:50.860587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.224 [2024-04-18 13:49:50.860603] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.224 [2024-04-18 13:49:50.860616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.224 [2024-04-18 13:49:50.870878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.224 qpair failed and we were unable to recover it. 
00:20:48.224 [2024-04-18 13:49:50.880564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.224 [2024-04-18 13:49:50.880628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.224 [2024-04-18 13:49:50.880660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.224 [2024-04-18 13:49:50.880676] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:50.880690] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.225 [2024-04-18 13:49:50.890982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.225 qpair failed and we were unable to recover it. 00:20:48.225 [2024-04-18 13:49:50.900634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.225 [2024-04-18 13:49:50.900697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.225 [2024-04-18 13:49:50.900726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.225 [2024-04-18 13:49:50.900742] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:50.900755] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.225 [2024-04-18 13:49:50.910883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.225 qpair failed and we were unable to recover it. 00:20:48.225 [2024-04-18 13:49:50.920610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.225 [2024-04-18 13:49:50.920672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.225 [2024-04-18 13:49:50.920704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.225 [2024-04-18 13:49:50.920720] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:50.920733] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.225 [2024-04-18 13:49:50.931068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.225 qpair failed and we were unable to recover it. 
00:20:48.225 [2024-04-18 13:49:50.940768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.225 [2024-04-18 13:49:50.940834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.225 [2024-04-18 13:49:50.940863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.225 [2024-04-18 13:49:50.940879] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:50.940892] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.225 [2024-04-18 13:49:50.950979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.225 qpair failed and we were unable to recover it. 00:20:48.225 [2024-04-18 13:49:50.960786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.225 [2024-04-18 13:49:50.960858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.225 [2024-04-18 13:49:50.960895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.225 [2024-04-18 13:49:50.960912] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:50.960925] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.225 [2024-04-18 13:49:50.971365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.225 qpair failed and we were unable to recover it. 00:20:48.225 [2024-04-18 13:49:50.980930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.225 [2024-04-18 13:49:50.981003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.225 [2024-04-18 13:49:50.981032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.225 [2024-04-18 13:49:50.981048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:50.981061] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.225 [2024-04-18 13:49:50.991248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.225 qpair failed and we were unable to recover it. 
00:20:48.225 [2024-04-18 13:49:51.000870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.225 [2024-04-18 13:49:51.000931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.225 [2024-04-18 13:49:51.000973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.225 [2024-04-18 13:49:51.000991] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:51.001005] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.225 [2024-04-18 13:49:51.011382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.225 qpair failed and we were unable to recover it. 00:20:48.225 [2024-04-18 13:49:51.020881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.225 [2024-04-18 13:49:51.020965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.225 [2024-04-18 13:49:51.020995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.225 [2024-04-18 13:49:51.021012] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.225 [2024-04-18 13:49:51.021025] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.031302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 00:20:48.484 [2024-04-18 13:49:51.041005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.484 [2024-04-18 13:49:51.041079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.484 [2024-04-18 13:49:51.041109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.484 [2024-04-18 13:49:51.041124] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.484 [2024-04-18 13:49:51.041144] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.051326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 
00:20:48.484 [2024-04-18 13:49:51.061140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.484 [2024-04-18 13:49:51.061205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.484 [2024-04-18 13:49:51.061235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.484 [2024-04-18 13:49:51.061251] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.484 [2024-04-18 13:49:51.061264] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.071652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 00:20:48.484 [2024-04-18 13:49:51.081241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.484 [2024-04-18 13:49:51.081303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.484 [2024-04-18 13:49:51.081333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.484 [2024-04-18 13:49:51.081349] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.484 [2024-04-18 13:49:51.081363] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.091502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 00:20:48.484 [2024-04-18 13:49:51.101474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.484 [2024-04-18 13:49:51.101548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.484 [2024-04-18 13:49:51.101578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.484 [2024-04-18 13:49:51.101594] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.484 [2024-04-18 13:49:51.101608] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.111631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 
00:20:48.484 [2024-04-18 13:49:51.121604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.484 [2024-04-18 13:49:51.121669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.484 [2024-04-18 13:49:51.121699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.484 [2024-04-18 13:49:51.121715] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.484 [2024-04-18 13:49:51.121728] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.131528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 00:20:48.484 [2024-04-18 13:49:51.141452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.484 [2024-04-18 13:49:51.141526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.484 [2024-04-18 13:49:51.141557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.484 [2024-04-18 13:49:51.141572] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.484 [2024-04-18 13:49:51.141586] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.151661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 00:20:48.484 [2024-04-18 13:49:51.161504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.484 [2024-04-18 13:49:51.161567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.484 [2024-04-18 13:49:51.161598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.484 [2024-04-18 13:49:51.161613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.484 [2024-04-18 13:49:51.161627] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.484 [2024-04-18 13:49:51.171869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.484 qpair failed and we were unable to recover it. 
00:20:48.484 [2024-04-18 13:49:51.181370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.485 [2024-04-18 13:49:51.181438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.485 [2024-04-18 13:49:51.181469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.485 [2024-04-18 13:49:51.181485] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.485 [2024-04-18 13:49:51.181499] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.485 [2024-04-18 13:49:51.191858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.485 qpair failed and we were unable to recover it. 00:20:48.485 [2024-04-18 13:49:51.201672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.485 [2024-04-18 13:49:51.201739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.485 [2024-04-18 13:49:51.201771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.485 [2024-04-18 13:49:51.201787] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.485 [2024-04-18 13:49:51.201801] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.485 [2024-04-18 13:49:51.211856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.485 qpair failed and we were unable to recover it. 00:20:48.485 [2024-04-18 13:49:51.221690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.485 [2024-04-18 13:49:51.221752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.485 [2024-04-18 13:49:51.221781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.485 [2024-04-18 13:49:51.221804] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.485 [2024-04-18 13:49:51.221818] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.485 [2024-04-18 13:49:51.231824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.485 qpair failed and we were unable to recover it. 
00:20:48.485 [2024-04-18 13:49:51.241789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.485 [2024-04-18 13:49:51.241850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.485 [2024-04-18 13:49:51.241878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.485 [2024-04-18 13:49:51.241894] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.485 [2024-04-18 13:49:51.241908] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.485 [2024-04-18 13:49:51.252167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.485 qpair failed and we were unable to recover it. 00:20:48.485 [2024-04-18 13:49:51.261551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.485 [2024-04-18 13:49:51.261620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.485 [2024-04-18 13:49:51.261655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.485 [2024-04-18 13:49:51.261671] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.485 [2024-04-18 13:49:51.261685] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.485 [2024-04-18 13:49:51.272135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.485 qpair failed and we were unable to recover it. 00:20:48.485 [2024-04-18 13:49:51.281709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.485 [2024-04-18 13:49:51.281784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.485 [2024-04-18 13:49:51.281817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.485 [2024-04-18 13:49:51.281833] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.485 [2024-04-18 13:49:51.281847] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.743 [2024-04-18 13:49:51.292059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.743 qpair failed and we were unable to recover it. 
00:20:48.743 [2024-04-18 13:49:51.301708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.743 [2024-04-18 13:49:51.301773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.743 [2024-04-18 13:49:51.301803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.743 [2024-04-18 13:49:51.301819] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.743 [2024-04-18 13:49:51.301832] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.743 [2024-04-18 13:49:51.312265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.743 qpair failed and we were unable to recover it. 00:20:48.743 [2024-04-18 13:49:51.321688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.743 [2024-04-18 13:49:51.321750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.743 [2024-04-18 13:49:51.321784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.743 [2024-04-18 13:49:51.321800] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.743 [2024-04-18 13:49:51.321813] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.743 [2024-04-18 13:49:51.332161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.743 qpair failed and we were unable to recover it. 00:20:48.743 [2024-04-18 13:49:51.341872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.743 [2024-04-18 13:49:51.341954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.743 [2024-04-18 13:49:51.341987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.743 [2024-04-18 13:49:51.342004] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.743 [2024-04-18 13:49:51.342017] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.743 [2024-04-18 13:49:51.352118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.743 qpair failed and we were unable to recover it. 
00:20:48.743 [2024-04-18 13:49:51.361902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.743 [2024-04-18 13:49:51.361988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.743 [2024-04-18 13:49:51.362018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.743 [2024-04-18 13:49:51.362033] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.743 [2024-04-18 13:49:51.362047] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.743 [2024-04-18 13:49:51.372262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.743 qpair failed and we were unable to recover it. 00:20:48.743 [2024-04-18 13:49:51.382046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.743 [2024-04-18 13:49:51.382108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.743 [2024-04-18 13:49:51.382142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.743 [2024-04-18 13:49:51.382158] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.743 [2024-04-18 13:49:51.382171] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.743 [2024-04-18 13:49:51.392532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.743 qpair failed and we were unable to recover it. 00:20:48.743 [2024-04-18 13:49:51.402042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.743 [2024-04-18 13:49:51.402102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.743 [2024-04-18 13:49:51.402138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.743 [2024-04-18 13:49:51.402154] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.743 [2024-04-18 13:49:51.402167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.743 [2024-04-18 13:49:51.412463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.743 qpair failed and we were unable to recover it. 
00:20:48.743 [2024-04-18 13:49:51.422239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.744 [2024-04-18 13:49:51.422310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.744 [2024-04-18 13:49:51.422343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.744 [2024-04-18 13:49:51.422359] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.744 [2024-04-18 13:49:51.422373] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.744 [2024-04-18 13:49:51.432690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.744 qpair failed and we were unable to recover it. 00:20:48.744 [2024-04-18 13:49:51.442185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.744 [2024-04-18 13:49:51.442252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.744 [2024-04-18 13:49:51.442285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.744 [2024-04-18 13:49:51.442301] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.744 [2024-04-18 13:49:51.442315] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.744 [2024-04-18 13:49:51.452559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.744 qpair failed and we were unable to recover it. 00:20:48.744 [2024-04-18 13:49:51.462245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.744 [2024-04-18 13:49:51.462308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.744 [2024-04-18 13:49:51.462337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.744 [2024-04-18 13:49:51.462353] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.744 [2024-04-18 13:49:51.462366] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.744 [2024-04-18 13:49:51.472654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.744 qpair failed and we were unable to recover it. 
00:20:48.744 [2024-04-18 13:49:51.482284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.744 [2024-04-18 13:49:51.482350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.744 [2024-04-18 13:49:51.482380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.744 [2024-04-18 13:49:51.482397] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.744 [2024-04-18 13:49:51.482417] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.744 [2024-04-18 13:49:51.492433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.744 qpair failed and we were unable to recover it. 00:20:48.744 [2024-04-18 13:49:51.502253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.744 [2024-04-18 13:49:51.502324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.744 [2024-04-18 13:49:51.502354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.744 [2024-04-18 13:49:51.502371] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.744 [2024-04-18 13:49:51.502384] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.744 [2024-04-18 13:49:51.512731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.744 qpair failed and we were unable to recover it. 00:20:48.744 [2024-04-18 13:49:51.522340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.744 [2024-04-18 13:49:51.522412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.744 [2024-04-18 13:49:51.522444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.744 [2024-04-18 13:49:51.522460] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.744 [2024-04-18 13:49:51.522473] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:48.744 [2024-04-18 13:49:51.532725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.744 qpair failed and we were unable to recover it. 
00:20:48.744 [2024-04-18 13:49:51.542367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:48.744 [2024-04-18 13:49:51.542431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:48.744 [2024-04-18 13:49:51.542464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:48.744 [2024-04-18 13:49:51.542480] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:48.744 [2024-04-18 13:49:51.542493] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.002 [2024-04-18 13:49:51.552757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.002 qpair failed and we were unable to recover it. 00:20:49.002 [2024-04-18 13:49:51.562507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.002 [2024-04-18 13:49:51.562571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.002 [2024-04-18 13:49:51.562603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.002 [2024-04-18 13:49:51.562619] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.002 [2024-04-18 13:49:51.562632] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.002 [2024-04-18 13:49:51.572765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.002 qpair failed and we were unable to recover it. 00:20:49.002 [2024-04-18 13:49:51.582573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.002 [2024-04-18 13:49:51.582649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.002 [2024-04-18 13:49:51.582681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.002 [2024-04-18 13:49:51.582697] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.002 [2024-04-18 13:49:51.582711] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.002 [2024-04-18 13:49:51.592804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.002 qpair failed and we were unable to recover it. 
00:20:49.002 [2024-04-18 13:49:51.602594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.002 [2024-04-18 13:49:51.602665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.002 [2024-04-18 13:49:51.602697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.002 [2024-04-18 13:49:51.602713] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.002 [2024-04-18 13:49:51.602727] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.613016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 00:20:49.003 [2024-04-18 13:49:51.622551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.622610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.622640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.622656] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.622670] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.632977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 00:20:49.003 [2024-04-18 13:49:51.642664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.642725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.642754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.642770] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.642783] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.653155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 
00:20:49.003 [2024-04-18 13:49:51.662743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.662816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.662846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.662867] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.662882] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.673264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 00:20:49.003 [2024-04-18 13:49:51.682843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.682911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.682952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.682971] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.682985] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.693194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 00:20:49.003 [2024-04-18 13:49:51.702906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.702976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.703010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.703027] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.703041] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.713424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 
00:20:49.003 [2024-04-18 13:49:51.722851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.722914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.722951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.722969] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.722983] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.733430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 00:20:49.003 [2024-04-18 13:49:51.743053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.743127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.743156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.743172] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.743186] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.753572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 00:20:49.003 [2024-04-18 13:49:51.763082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.763156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.763185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.763201] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.763214] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.773587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 
00:20:49.003 [2024-04-18 13:49:51.783129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.783194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.783224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.783240] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.783252] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.003 [2024-04-18 13:49:51.793524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.003 qpair failed and we were unable to recover it. 00:20:49.003 [2024-04-18 13:49:51.803131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.003 [2024-04-18 13:49:51.803190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.003 [2024-04-18 13:49:51.803220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.003 [2024-04-18 13:49:51.803236] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.003 [2024-04-18 13:49:51.803250] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.813721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 00:20:49.265 [2024-04-18 13:49:51.823390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.265 [2024-04-18 13:49:51.823463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.265 [2024-04-18 13:49:51.823496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.265 [2024-04-18 13:49:51.823512] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.265 [2024-04-18 13:49:51.823526] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.833662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 
00:20:49.265 [2024-04-18 13:49:51.843422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.265 [2024-04-18 13:49:51.843485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.265 [2024-04-18 13:49:51.843521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.265 [2024-04-18 13:49:51.843537] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.265 [2024-04-18 13:49:51.843550] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.853666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 00:20:49.265 [2024-04-18 13:49:51.863323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.265 [2024-04-18 13:49:51.863388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.265 [2024-04-18 13:49:51.863418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.265 [2024-04-18 13:49:51.863434] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.265 [2024-04-18 13:49:51.863447] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.873839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 00:20:49.265 [2024-04-18 13:49:51.883502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.265 [2024-04-18 13:49:51.883564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.265 [2024-04-18 13:49:51.883595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.265 [2024-04-18 13:49:51.883611] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.265 [2024-04-18 13:49:51.883625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.893841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 
00:20:49.265 [2024-04-18 13:49:51.903681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.265 [2024-04-18 13:49:51.903750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.265 [2024-04-18 13:49:51.903781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.265 [2024-04-18 13:49:51.903798] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.265 [2024-04-18 13:49:51.903812] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.913916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 00:20:49.265 [2024-04-18 13:49:51.923665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.265 [2024-04-18 13:49:51.923733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.265 [2024-04-18 13:49:51.923765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.265 [2024-04-18 13:49:51.923781] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.265 [2024-04-18 13:49:51.923801] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.933888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 00:20:49.265 [2024-04-18 13:49:51.943624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.265 [2024-04-18 13:49:51.943688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.265 [2024-04-18 13:49:51.943716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.265 [2024-04-18 13:49:51.943732] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.265 [2024-04-18 13:49:51.943746] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.265 [2024-04-18 13:49:51.953951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.265 qpair failed and we were unable to recover it. 
00:20:49.265 [2024-04-18 13:49:51.963842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.266 [2024-04-18 13:49:51.963902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.266 [2024-04-18 13:49:51.963934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.266 [2024-04-18 13:49:51.963962] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.266 [2024-04-18 13:49:51.963976] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.266 [2024-04-18 13:49:51.974127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.266 qpair failed and we were unable to recover it. 00:20:49.266 [2024-04-18 13:49:51.983881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.266 [2024-04-18 13:49:51.983968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.266 [2024-04-18 13:49:51.983997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.266 [2024-04-18 13:49:51.984013] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.266 [2024-04-18 13:49:51.984027] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.266 [2024-04-18 13:49:51.994193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.266 qpair failed and we were unable to recover it. 00:20:49.266 [2024-04-18 13:49:52.003882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.266 [2024-04-18 13:49:52.003964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.266 [2024-04-18 13:49:52.003998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.266 [2024-04-18 13:49:52.004014] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.266 [2024-04-18 13:49:52.004028] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.266 [2024-04-18 13:49:52.014091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.266 qpair failed and we were unable to recover it. 
00:20:49.266 [2024-04-18 13:49:52.023863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.266 [2024-04-18 13:49:52.023926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.266 [2024-04-18 13:49:52.023965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.266 [2024-04-18 13:49:52.023983] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.266 [2024-04-18 13:49:52.023996] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.266 [2024-04-18 13:49:52.034415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.266 qpair failed and we were unable to recover it. 00:20:49.266 [2024-04-18 13:49:52.043922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.266 [2024-04-18 13:49:52.043991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.266 [2024-04-18 13:49:52.044021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.266 [2024-04-18 13:49:52.044037] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.266 [2024-04-18 13:49:52.044050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.266 [2024-04-18 13:49:52.054179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.266 qpair failed and we were unable to recover it. 00:20:49.266 [2024-04-18 13:49:52.064118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.266 [2024-04-18 13:49:52.064194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.266 [2024-04-18 13:49:52.064225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.266 [2024-04-18 13:49:52.064241] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.266 [2024-04-18 13:49:52.064254] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.540 [2024-04-18 13:49:52.074236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.540 qpair failed and we were unable to recover it. 
00:20:49.540 [2024-04-18 13:49:52.084191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.540 [2024-04-18 13:49:52.084257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.540 [2024-04-18 13:49:52.084291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.540 [2024-04-18 13:49:52.084307] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.540 [2024-04-18 13:49:52.084320] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.540 [2024-04-18 13:49:52.094415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.540 qpair failed and we were unable to recover it. 00:20:49.540 [2024-04-18 13:49:52.104204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.540 [2024-04-18 13:49:52.104269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.540 [2024-04-18 13:49:52.104302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.104328] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.104343] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.114519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.541 [2024-04-18 13:49:52.124345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.124403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.124432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.124448] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.124461] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.134691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 
00:20:49.541 [2024-04-18 13:49:52.144534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.144610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.144639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.144655] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.144668] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.154921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.541 [2024-04-18 13:49:52.164477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.164550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.164579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.164595] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.164609] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.174834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.541 [2024-04-18 13:49:52.184402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.184465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.184497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.184513] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.184527] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.194802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 
00:20:49.541 [2024-04-18 13:49:52.204506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.204568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.204600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.204617] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.204630] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.214977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.541 [2024-04-18 13:49:52.224644] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.224714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.224746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.224762] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.224776] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.234894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.541 [2024-04-18 13:49:52.244831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.244899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.244929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.244954] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.244968] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.254902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 
00:20:49.541 [2024-04-18 13:49:52.264669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.264734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.264766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.264782] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.264796] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.274800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.541 [2024-04-18 13:49:52.284740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.284802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.284841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.284858] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.284872] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.295339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.541 [2024-04-18 13:49:52.305169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.305241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.305271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.305287] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.305300] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.315341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 
00:20:49.541 [2024-04-18 13:49:52.325089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.541 [2024-04-18 13:49:52.325164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.541 [2024-04-18 13:49:52.325193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.541 [2024-04-18 13:49:52.325209] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.541 [2024-04-18 13:49:52.325222] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.541 [2024-04-18 13:49:52.335479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.541 qpair failed and we were unable to recover it. 00:20:49.799 [2024-04-18 13:49:52.345061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.799 [2024-04-18 13:49:52.345127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.799 [2024-04-18 13:49:52.345156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.799 [2024-04-18 13:49:52.345172] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.799 [2024-04-18 13:49:52.345185] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.799 [2024-04-18 13:49:52.355408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.799 qpair failed and we were unable to recover it. 00:20:49.799 [2024-04-18 13:49:52.365189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.799 [2024-04-18 13:49:52.365251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.799 [2024-04-18 13:49:52.365284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.799 [2024-04-18 13:49:52.365301] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.799 [2024-04-18 13:49:52.365321] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.799 [2024-04-18 13:49:52.375408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.799 qpair failed and we were unable to recover it. 
00:20:49.799 [2024-04-18 13:49:52.384898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.799 [2024-04-18 13:49:52.384981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.799 [2024-04-18 13:49:52.385014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.799 [2024-04-18 13:49:52.385030] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.799 [2024-04-18 13:49:52.385044] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.799 [2024-04-18 13:49:52.395667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.799 qpair failed and we were unable to recover it. 00:20:49.799 [2024-04-18 13:49:52.405115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.799 [2024-04-18 13:49:52.405185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.799 [2024-04-18 13:49:52.405214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.799 [2024-04-18 13:49:52.405230] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.799 [2024-04-18 13:49:52.405244] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.799 [2024-04-18 13:49:52.415415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.799 qpair failed and we were unable to recover it. 00:20:49.799 [2024-04-18 13:49:52.425272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.799 [2024-04-18 13:49:52.425338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.799 [2024-04-18 13:49:52.425371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.799 [2024-04-18 13:49:52.425387] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.799 [2024-04-18 13:49:52.425401] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.799 [2024-04-18 13:49:52.435679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.799 qpair failed and we were unable to recover it. 
00:20:49.799 [2024-04-18 13:49:52.445268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:49.799 [2024-04-18 13:49:52.445331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:49.799 [2024-04-18 13:49:52.445363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:49.799 [2024-04-18 13:49:52.445379] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:49.799 [2024-04-18 13:49:52.445393] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:20:49.799 [2024-04-18 13:49:52.455623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:49.799 qpair failed and we were unable to recover it. 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read 
completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Write completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 Read completed with error (sct=0, sc=8) 00:20:50.731 starting I/O failed 00:20:50.731 [2024-04-18 13:49:53.460985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.731 [2024-04-18 13:49:53.460998] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:20:50.731 A controller has encountered a failure and is being reset. 00:20:50.731 [2024-04-18 13:49:53.461088] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:20:50.731 [2024-04-18 13:49:53.462880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:50.731 Controller properly reset. 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O 
failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Write completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 Read completed with error (sct=0, sc=8) 00:20:52.101 starting I/O failed 00:20:52.101 [2024-04-18 13:49:54.494065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:52.101 Initializing NVMe Controllers 00:20:52.101 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.101 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.101 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:20:52.101 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:20:52.101 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:20:52.101 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:20:52.101 Initialization complete. Launching workers. 00:20:52.101 Starting thread on core 1 00:20:52.101 Starting thread on core 2 00:20:52.101 Starting thread on core 3 00:20:52.101 Starting thread on core 0 00:20:52.101 13:49:54 -- host/target_disconnect.sh@59 -- # sync 00:20:52.101 00:20:52.101 real 0m13.022s 00:20:52.101 user 0m24.708s 00:20:52.101 sys 0m3.180s 00:20:52.101 13:49:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:52.101 13:49:54 -- common/autotest_common.sh@10 -- # set +x 00:20:52.101 ************************************ 00:20:52.101 END TEST nvmf_target_disconnect_tc2 00:20:52.101 ************************************ 00:20:52.101 13:49:54 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:20:52.101 13:49:54 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:20:52.101 13:49:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:52.101 13:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.101 13:49:54 -- common/autotest_common.sh@10 -- # set +x 00:20:52.101 ************************************ 00:20:52.101 START TEST nvmf_target_disconnect_tc3 00:20:52.101 ************************************ 00:20:52.101 13:49:54 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc3 00:20:52.101 13:49:54 -- host/target_disconnect.sh@65 -- # reconnectpid=1210304 00:20:52.101 13:49:54 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:20:52.101 13:49:54 -- host/target_disconnect.sh@67 -- # sleep 2 00:20:52.101 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.997 13:49:56 -- host/target_disconnect.sh@68 -- # kill -9 1209343 00:20:53.997 13:49:56 -- host/target_disconnect.sh@70 -- # sleep 2 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 
Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Read completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.367 starting I/O failed 00:20:55.367 Write completed with error (sct=0, sc=8) 00:20:55.368 starting I/O failed 00:20:55.368 [2024-04-18 13:49:57.918855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:55.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 1209343 Killed "${NVMF_APP[@]}" "$@" 00:20:55.931 13:49:58 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:20:55.931 13:49:58 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:55.931 13:49:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:55.931 13:49:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:55.931 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:55.931 13:49:58 -- nvmf/common.sh@470 -- # nvmfpid=1210828 00:20:55.931 13:49:58 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:55.931 13:49:58 -- nvmf/common.sh@471 -- # waitforlisten 1210828 00:20:55.931 13:49:58 -- common/autotest_common.sh@817 -- # '[' -z 1210828 ']' 00:20:55.931 13:49:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.931 13:49:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.931 13:49:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.931 13:49:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.931 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.189 [2024-04-18 13:49:58.749378] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:56.189 [2024-04-18 13:49:58.749488] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.189 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.189 [2024-04-18 13:49:58.842323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Write completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 Read completed 
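The target application started above can be reproduced outside the harness with roughly the following steps; this is a minimal sketch based only on the flags visible in the log, and a plain polling loop stands in for the harness's waitforlisten helper (the RPC socket path /var/tmp/spdk.sock is the one reported above):

  # start the target with the same options seen in the log:
  #   -i 0       shared-memory instance id
  #   -e 0xFFFF  tracepoint group mask
  #   -m 0xF0    core mask (cores 4-7, matching the four reactors reported above)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # wait until the RPC socket answers before issuing configuration RPCs
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done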
with error (sct=0, sc=8) 00:20:56.189 starting I/O failed 00:20:56.189 [2024-04-18 13:49:58.924653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:56.189 [2024-04-18 13:49:58.969383] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.189 [2024-04-18 13:49:58.969446] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.189 [2024-04-18 13:49:58.969463] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.189 [2024-04-18 13:49:58.969477] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.189 [2024-04-18 13:49:58.969489] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.189 [2024-04-18 13:49:58.969579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:56.189 [2024-04-18 13:49:58.969634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:56.189 [2024-04-18 13:49:58.969688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:56.189 [2024-04-18 13:49:58.969691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.447 13:49:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.447 13:49:59 -- common/autotest_common.sh@850 -- # return 0 00:20:56.447 13:49:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:56.447 13:49:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:56.447 13:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.447 13:49:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.447 13:49:59 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:56.447 13:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.447 13:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.447 Malloc0 00:20:56.447 13:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.447 13:49:59 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:56.447 13:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.447 13:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.704 [2024-04-18 13:49:59.261187] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16e4200/0x16efdc0) succeed. 00:20:56.704 [2024-04-18 13:49:59.273906] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16e57f0/0x178fec0) succeed. 
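(Context note: the rpc_cmd calls logged just above and below are thin wrappers that drive SPDK's scripts/rpc.py against the target listening on /var/tmp/spdk.sock. A minimal standalone sketch of the same NVMe-oF/RDMA target setup the harness performs here is shown next; the method names and arguments are taken verbatim from the log, while the explicit rpc.py path and -s socket flag are assumptions for a manual run outside the test scripts.)
    # sketch only: stand up an RDMA target equivalent to what target_disconnect.sh drives via rpc_cmd
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420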
00:20:56.704 13:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.704 13:49:59 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.704 13:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.704 13:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.704 13:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.704 13:49:59 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:56.704 13:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.704 13:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.704 13:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.704 13:49:59 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:20:56.704 13:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.704 13:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.704 [2024-04-18 13:49:59.464309] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:20:56.704 13:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.704 13:49:59 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:20:56.704 13:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.704 13:49:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.704 13:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.704 13:49:59 -- host/target_disconnect.sh@73 -- # wait 1210304 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with 
error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Write completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 Read completed with error (sct=0, sc=8) 00:20:57.268 starting I/O failed 00:20:57.268 [2024-04-18 13:49:59.930100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error 
(sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Read completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 Write completed with error (sct=0, sc=8) 00:20:58.198 starting I/O failed 00:20:58.198 [2024-04-18 13:50:00.935557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:58.198 [2024-04-18 13:50:00.935609] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:20:58.198 A controller has encountered a failure and is being reset. 00:20:58.198 Resorting to new failover address 192.168.100.9 00:20:58.198 [2024-04-18 13:50:00.935669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:58.198 [2024-04-18 13:50:00.935721] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:20:58.198 [2024-04-18 13:50:00.954511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:58.198 Controller properly reset. 00:21:02.373 Initializing NVMe Controllers 00:21:02.373 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.373 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:02.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:02.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:02.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:02.373 Initialization complete. Launching workers. 
00:21:02.373 Starting thread on core 1 00:21:02.373 Starting thread on core 2 00:21:02.373 Starting thread on core 3 00:21:02.373 Starting thread on core 0 00:21:02.373 13:50:05 -- host/target_disconnect.sh@74 -- # sync 00:21:02.373 00:21:02.373 real 0m10.330s 00:21:02.373 user 1m6.780s 00:21:02.373 sys 0m1.865s 00:21:02.373 13:50:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:02.373 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.373 ************************************ 00:21:02.373 END TEST nvmf_target_disconnect_tc3 00:21:02.373 ************************************ 00:21:02.373 13:50:05 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:02.373 13:50:05 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:21:02.373 13:50:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:02.373 13:50:05 -- nvmf/common.sh@117 -- # sync 00:21:02.373 13:50:05 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:02.373 13:50:05 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:02.373 13:50:05 -- nvmf/common.sh@120 -- # set +e 00:21:02.373 13:50:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.373 13:50:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:02.373 rmmod nvme_rdma 00:21:02.373 rmmod nvme_fabrics 00:21:02.373 13:50:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.373 13:50:05 -- nvmf/common.sh@124 -- # set -e 00:21:02.373 13:50:05 -- nvmf/common.sh@125 -- # return 0 00:21:02.373 13:50:05 -- nvmf/common.sh@478 -- # '[' -n 1210828 ']' 00:21:02.373 13:50:05 -- nvmf/common.sh@479 -- # killprocess 1210828 00:21:02.373 13:50:05 -- common/autotest_common.sh@936 -- # '[' -z 1210828 ']' 00:21:02.373 13:50:05 -- common/autotest_common.sh@940 -- # kill -0 1210828 00:21:02.373 13:50:05 -- common/autotest_common.sh@941 -- # uname 00:21:02.373 13:50:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.373 13:50:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1210828 00:21:02.373 13:50:05 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:21:02.373 13:50:05 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:21:02.373 13:50:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1210828' 00:21:02.373 killing process with pid 1210828 00:21:02.373 13:50:05 -- common/autotest_common.sh@955 -- # kill 1210828 00:21:02.373 13:50:05 -- common/autotest_common.sh@960 -- # wait 1210828 00:21:02.938 13:50:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:02.938 13:50:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:02.938 00:21:02.938 real 0m28.396s 00:21:02.938 user 1m59.189s 00:21:02.938 sys 0m7.921s 00:21:02.938 13:50:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:02.938 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.938 ************************************ 00:21:02.938 END TEST nvmf_target_disconnect 00:21:02.938 ************************************ 00:21:02.938 13:50:05 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:21:02.938 13:50:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:02.938 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.938 13:50:05 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:21:02.938 00:21:02.938 real 14m20.530s 00:21:02.938 user 41m30.043s 00:21:02.938 sys 2m41.098s 00:21:02.938 13:50:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:02.938 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.938 ************************************ 
00:21:02.938 END TEST nvmf_rdma 00:21:02.938 ************************************ 00:21:02.938 13:50:05 -- spdk/autotest.sh@283 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:02.938 13:50:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:02.938 13:50:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:02.938 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:03.196 ************************************ 00:21:03.196 START TEST spdkcli_nvmf_rdma 00:21:03.196 ************************************ 00:21:03.196 13:50:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:03.196 * Looking for test storage... 00:21:03.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:21:03.196 13:50:05 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:21:03.196 13:50:05 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:03.196 13:50:05 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:21:03.196 13:50:05 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.196 13:50:05 -- nvmf/common.sh@7 -- # uname -s 00:21:03.196 13:50:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.196 13:50:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.196 13:50:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.196 13:50:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.196 13:50:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.196 13:50:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.196 13:50:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.196 13:50:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.196 13:50:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.196 13:50:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.196 13:50:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:03.196 13:50:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:21:03.196 13:50:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.196 13:50:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.196 13:50:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.196 13:50:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.196 13:50:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:03.196 13:50:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.196 13:50:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.196 13:50:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.196 13:50:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.196 13:50:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.196 13:50:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.196 13:50:05 -- paths/export.sh@5 -- # export PATH 00:21:03.196 13:50:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.196 13:50:05 -- nvmf/common.sh@47 -- # : 0 00:21:03.196 13:50:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.196 13:50:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.196 13:50:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.196 13:50:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.196 13:50:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.196 13:50:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.196 13:50:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.196 13:50:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.196 13:50:05 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:03.196 13:50:05 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:03.196 13:50:05 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:03.196 13:50:05 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:03.196 13:50:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:03.196 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:03.196 13:50:05 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:03.196 13:50:05 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1211752 00:21:03.196 13:50:05 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:03.196 13:50:05 -- spdkcli/common.sh@34 -- # waitforlisten 1211752 00:21:03.196 13:50:05 -- common/autotest_common.sh@817 -- # '[' -z 1211752 ']' 00:21:03.196 13:50:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.196 13:50:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:03.196 13:50:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:03.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.196 13:50:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:03.196 13:50:05 -- common/autotest_common.sh@10 -- # set +x 00:21:03.196 [2024-04-18 13:50:05.881788] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:21:03.196 [2024-04-18 13:50:05.881892] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211752 ] 00:21:03.196 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.196 [2024-04-18 13:50:05.966971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:03.454 [2024-04-18 13:50:06.089372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.454 [2024-04-18 13:50:06.089377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.387 13:50:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:04.387 13:50:07 -- common/autotest_common.sh@850 -- # return 0 00:21:04.387 13:50:07 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:04.387 13:50:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:04.387 13:50:07 -- common/autotest_common.sh@10 -- # set +x 00:21:04.387 13:50:07 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:04.387 13:50:07 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:21:04.387 13:50:07 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:21:04.387 13:50:07 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:04.387 13:50:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.387 13:50:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:04.387 13:50:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:04.387 13:50:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:04.387 13:50:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.387 13:50:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:04.387 13:50:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.387 13:50:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:04.387 13:50:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:04.387 13:50:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:04.387 13:50:07 -- common/autotest_common.sh@10 -- # set +x 00:21:07.664 13:50:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:07.664 13:50:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.664 13:50:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.664 13:50:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.664 13:50:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.664 13:50:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.664 13:50:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.664 13:50:09 -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.664 13:50:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.664 13:50:09 -- nvmf/common.sh@296 -- # e810=() 00:21:07.664 13:50:09 -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.664 13:50:09 -- nvmf/common.sh@297 -- # x722=() 00:21:07.664 13:50:09 -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.664 13:50:09 -- nvmf/common.sh@298 -- # mlx=() 00:21:07.664 13:50:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.664 13:50:09 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.664 13:50:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.664 13:50:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.664 13:50:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.665 13:50:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.665 13:50:09 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:07.665 13:50:09 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:07.665 13:50:09 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:07.665 13:50:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.665 13:50:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:21:07.665 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:21:07.665 13:50:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.665 13:50:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:21:07.665 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:21:07.665 13:50:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.665 13:50:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.665 13:50:09 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.665 13:50:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.665 13:50:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.665 13:50:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:21:07.665 Found net devices under 0000:81:00.0: mlx_0_0 00:21:07.665 13:50:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.665 13:50:09 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.665 13:50:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.665 13:50:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.665 13:50:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:21:07.665 Found net devices under 0000:81:00.1: mlx_0_1 00:21:07.665 13:50:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.665 13:50:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:07.665 13:50:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:07.665 13:50:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:07.665 13:50:09 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:07.665 13:50:09 -- nvmf/common.sh@58 -- # uname 00:21:07.665 13:50:09 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:07.665 13:50:09 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:07.665 13:50:09 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:07.665 13:50:09 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:07.665 13:50:09 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:07.665 13:50:09 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:07.665 13:50:09 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:07.665 13:50:09 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:07.665 13:50:09 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:07.665 13:50:09 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:07.665 13:50:09 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:07.665 13:50:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.665 13:50:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.665 13:50:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.665 13:50:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.665 13:50:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.665 13:50:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.665 13:50:09 -- nvmf/common.sh@105 -- # continue 2 00:21:07.665 13:50:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.665 13:50:09 -- nvmf/common.sh@105 -- # continue 2 00:21:07.665 13:50:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.665 13:50:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:07.665 13:50:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.665 13:50:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:07.665 13:50:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.665 13:50:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 
00:21:07.665 13:50:09 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:07.665 13:50:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:07.665 313: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.665 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:21:07.665 altname enp129s0f0np0 00:21:07.665 inet 192.168.100.8/24 scope global mlx_0_0 00:21:07.665 valid_lft forever preferred_lft forever 00:21:07.665 13:50:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.665 13:50:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:07.665 13:50:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.665 13:50:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.665 13:50:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.665 13:50:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.665 13:50:09 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:07.665 13:50:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:07.665 314: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.665 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:21:07.665 altname enp129s0f1np1 00:21:07.665 inet 192.168.100.9/24 scope global mlx_0_1 00:21:07.665 valid_lft forever preferred_lft forever 00:21:07.665 13:50:09 -- nvmf/common.sh@411 -- # return 0 00:21:07.665 13:50:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:07.665 13:50:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:07.665 13:50:09 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:07.665 13:50:09 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:07.665 13:50:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.665 13:50:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.665 13:50:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.665 13:50:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.665 13:50:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.665 13:50:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.665 13:50:09 -- nvmf/common.sh@105 -- # continue 2 00:21:07.665 13:50:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.665 13:50:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.665 13:50:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.666 13:50:09 -- nvmf/common.sh@105 -- # continue 2 00:21:07.666 13:50:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.666 13:50:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:07.666 13:50:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.666 13:50:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:07.666 13:50:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.666 13:50:09 -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.666 13:50:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.666 13:50:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:07.666 13:50:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.666 13:50:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.666 13:50:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.666 13:50:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.666 13:50:09 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:07.666 192.168.100.9' 00:21:07.666 13:50:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:07.666 192.168.100.9' 00:21:07.666 13:50:09 -- nvmf/common.sh@446 -- # head -n 1 00:21:07.666 13:50:09 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:07.666 13:50:09 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:07.666 192.168.100.9' 00:21:07.666 13:50:09 -- nvmf/common.sh@447 -- # tail -n +2 00:21:07.666 13:50:09 -- nvmf/common.sh@447 -- # head -n 1 00:21:07.666 13:50:09 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:07.666 13:50:09 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:07.666 13:50:09 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:07.666 13:50:09 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:07.666 13:50:09 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:07.666 13:50:09 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:07.666 13:50:09 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:21:07.666 13:50:09 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:07.666 13:50:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:07.666 13:50:09 -- common/autotest_common.sh@10 -- # set +x 00:21:07.666 13:50:09 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:07.666 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:07.666 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:07.666 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:07.666 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:07.666 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:07.666 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:07.666 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:07.666 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:07.666 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:07.666 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:07.666 ' 00:21:07.666 [2024-04-18 13:50:10.410393] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:10.237 [2024-04-18 13:50:12.625133] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b6c050/0x1cbdcc0) succeed. 00:21:10.237 [2024-04-18 13:50:12.640882] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b6d370/0x1b7dac0) succeed. 
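(Context note: the spdkcli_job.py invocation above simply replays a scripted list of spdkcli commands and checks that each one succeeds. The same configuration could in principle be entered interactively; a rough sketch follows, assuming the nvmf target from this run is still up and that scripts/spdkcli.py connects to it over the default RPC socket. Only the first few commands from the scripted list are repeated here, verbatim from the log.)
    # sketch only: a few of the scripted commands, typed at the spdkcli prompt
    ./scripts/spdkcli.py
    /> /bdevs/malloc create 32 512 Malloc1
    /> nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    /> /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    /> /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
    /> ll /nvmf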
00:21:11.169 [2024-04-18 13:50:13.960010] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:21:13.695 [2024-04-18 13:50:16.247238] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:21:15.598 [2024-04-18 13:50:18.221680] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:21:16.971 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:16.971 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:16.971 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:16.971 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:16.971 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:16.971 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:16.971 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:16.971 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:16.971 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:16.971 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:21:16.971 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:16.971 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:16.971 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:17.229 13:50:19 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:17.229 13:50:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:17.229 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:21:17.229 13:50:19 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:17.229 13:50:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:17.229 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:21:17.229 13:50:19 -- spdkcli/nvmf.sh@69 -- # check_match 00:21:17.229 13:50:19 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:17.794 13:50:20 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:17.794 13:50:20 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:17.794 13:50:20 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:17.794 13:50:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:17.794 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:21:17.794 13:50:20 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:17.794 13:50:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:17.794 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:21:17.794 13:50:20 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:17.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:17.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:17.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:17.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:21:17.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:21:17.794 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:17.794 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:17.794 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:17.794 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:17.794 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:17.794 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:17.794 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:17.794 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:17.794 ' 00:21:23.085 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:23.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:23.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:23.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:23.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:21:23.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:21:23.085 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:23.085 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:23.085 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:23.085 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:23.085 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:23.085 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:23.085 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:23.085 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:23.085 13:50:25 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:23.085 13:50:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.085 13:50:25 -- common/autotest_common.sh@10 -- # set +x 00:21:23.085 13:50:25 -- spdkcli/nvmf.sh@90 -- # killprocess 1211752 00:21:23.085 13:50:25 -- common/autotest_common.sh@936 -- # '[' -z 1211752 ']' 00:21:23.085 13:50:25 -- common/autotest_common.sh@940 -- # kill -0 1211752 00:21:23.085 13:50:25 -- common/autotest_common.sh@941 -- # uname 00:21:23.085 13:50:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.085 13:50:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1211752 00:21:23.085 13:50:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:23.085 13:50:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:23.085 13:50:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1211752' 00:21:23.085 killing process with pid 1211752 00:21:23.085 13:50:25 -- common/autotest_common.sh@955 -- # kill 1211752 00:21:23.085 [2024-04-18 13:50:25.758507] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:23.085 13:50:25 -- common/autotest_common.sh@960 -- # wait 1211752 00:21:23.342 13:50:26 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:21:23.342 13:50:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:23.342 13:50:26 -- nvmf/common.sh@117 -- # sync 00:21:23.342 13:50:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:23.342 13:50:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:23.342 13:50:26 -- nvmf/common.sh@120 -- # set +e 00:21:23.342 13:50:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.342 13:50:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:23.342 rmmod nvme_rdma 00:21:23.599 rmmod nvme_fabrics 00:21:23.599 13:50:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:21:23.599 13:50:26 -- nvmf/common.sh@124 -- # set -e 00:21:23.599 13:50:26 -- nvmf/common.sh@125 -- # return 0 00:21:23.599 13:50:26 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:21:23.599 13:50:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:23.599 13:50:26 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:23.599 00:21:23.599 real 0m20.418s 00:21:23.599 user 0m43.981s 00:21:23.599 sys 0m2.985s 00:21:23.599 13:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:23.599 13:50:26 -- common/autotest_common.sh@10 -- # set +x 00:21:23.599 ************************************ 00:21:23.599 END TEST spdkcli_nvmf_rdma 00:21:23.599 ************************************ 00:21:23.599 13:50:26 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:21:23.599 13:50:26 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:21:23.599 13:50:26 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:21:23.599 13:50:26 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:21:23.599 13:50:26 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:21:23.599 13:50:26 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:21:23.599 13:50:26 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:21:23.599 13:50:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:23.599 13:50:26 -- common/autotest_common.sh@10 -- # set +x 00:21:23.599 13:50:26 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:21:23.599 13:50:26 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:21:23.599 13:50:26 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:21:23.599 13:50:26 -- common/autotest_common.sh@10 -- # set +x 00:21:26.145 INFO: APP EXITING 00:21:26.145 INFO: killing all VMs 00:21:26.145 INFO: killing vhost app 00:21:26.145 WARN: no vhost pid file found 00:21:26.146 INFO: EXIT DONE 00:21:27.079 Waiting for block devices as requested 00:21:27.079 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:21:27.337 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:21:27.337 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:21:27.337 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:21:27.337 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:21:27.594 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:21:27.594 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:21:27.594 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:21:27.594 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:21:27.851 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:21:27.851 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:21:27.851 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:21:27.851 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:21:28.109 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:21:28.109 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:21:28.109 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:21:28.109 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:21:30.008 Cleaning 00:21:30.008 Removing: 
/var/run/dpdk/spdk0/config 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:21:30.008 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:30.008 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:30.008 Removing: /var/run/dpdk/spdk1/config 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:21:30.008 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:30.008 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:30.008 Removing: /var/run/dpdk/spdk1/mp_socket 00:21:30.008 Removing: /var/run/dpdk/spdk2/config 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:21:30.008 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:30.008 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:30.008 Removing: /var/run/dpdk/spdk3/config 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:21:30.008 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:30.008 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:30.008 Removing: /var/run/dpdk/spdk4/config 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:21:30.008 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:30.008 Removing: 
/var/run/dpdk/spdk4/hugepage_info 00:21:30.008 Removing: /dev/shm/bdevperf_trace.pid1092483 00:21:30.008 Removing: /dev/shm/bdevperf_trace.pid1160055 00:21:30.008 Removing: /dev/shm/bdev_svc_trace.1 00:21:30.008 Removing: /dev/shm/nvmf_trace.0 00:21:30.008 Removing: /dev/shm/spdk_tgt_trace.pid1014555 00:21:30.008 Removing: /var/run/dpdk/spdk0 00:21:30.008 Removing: /var/run/dpdk/spdk1 00:21:30.008 Removing: /var/run/dpdk/spdk2 00:21:30.008 Removing: /var/run/dpdk/spdk3 00:21:30.008 Removing: /var/run/dpdk/spdk4 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1012775 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1013542 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1014555 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1015104 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1015803 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1015943 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1016682 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1016753 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1017067 00:21:30.008 Removing: /var/run/dpdk/spdk_pid1020281 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1021453 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1021782 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1021974 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1022315 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1022520 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1022694 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1022859 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1023166 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1023641 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1026525 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1026698 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1026900 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1026998 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1027452 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1027569 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1028027 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1028145 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1028428 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1028457 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1028630 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1028761 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1029267 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1029431 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1029639 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1029941 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1029978 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1030186 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1030415 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1030637 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1030800 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1031091 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1031248 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1031537 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1031701 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1031934 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1032152 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1032312 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1032599 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1032765 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1033041 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1033214 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1033495 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1033657 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1033919 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1034113 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1034314 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1034565 00:21:30.009 
Removing: /var/run/dpdk/spdk_pid1034761 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1034982 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1037481 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1066801 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1069685 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1075708 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1079139 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1081383 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1082130 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1092483 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1092629 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1095426 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1099547 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1101855 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1108666 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1124945 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1127497 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1138776 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1158515 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1159212 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1160055 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1162802 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1167249 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1167913 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1168568 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1169231 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1169500 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1172507 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1172512 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1175556 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1175952 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1176341 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1176823 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1176876 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1179780 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1180120 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1183058 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1184911 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1190263 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1190272 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1204423 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1204676 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1208629 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1208818 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1210304 00:21:30.009 Removing: /var/run/dpdk/spdk_pid1211752 00:21:30.009 Clean 00:21:30.267 13:50:32 -- common/autotest_common.sh@1437 -- # return 0 00:21:30.267 13:50:32 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:21:30.267 13:50:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:30.267 13:50:32 -- common/autotest_common.sh@10 -- # set +x 00:21:30.267 13:50:32 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:21:30.267 13:50:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:30.267 13:50:32 -- common/autotest_common.sh@10 -- # set +x 00:21:30.267 13:50:32 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:21:30.267 13:50:32 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:21:30.267 13:50:32 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:21:30.267 13:50:33 -- spdk/autotest.sh@389 -- # hash lcov 00:21:30.267 13:50:33 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:30.267 13:50:33 -- spdk/autotest.sh@391 -- # hostname 00:21:30.267 13:50:33 -- 
spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-gp-14 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:21:30.526 geninfo: WARNING: invalid characters removed from testname! 00:22:26.735 13:51:23 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:26.735 13:51:27 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:33.304 13:51:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:41.413 13:51:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:47.970 13:51:50 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:56.080 13:51:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:02.641 13:52:05 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:02.641 13:52:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:02.641 13:52:05 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:02.641 13:52:05 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.641 13:52:05 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.641 13:52:05 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.641 13:52:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.641 13:52:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.641 13:52:05 -- paths/export.sh@5 -- $ export PATH 00:23:02.641 13:52:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.641 13:52:05 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:23:02.641 13:52:05 -- common/autobuild_common.sh@435 -- $ date +%s 00:23:02.641 13:52:05 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713441125.XXXXXX 00:23:02.641 13:52:05 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713441125.wbWkDc 00:23:02.641 13:52:05 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:23:02.641 13:52:05 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:23:02.641 13:52:05 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:23:02.641 13:52:05 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:23:02.641 13:52:05 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:23:02.641 13:52:05 -- common/autobuild_common.sh@451 -- $ get_config_params 00:23:02.641 13:52:05 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:23:02.641 13:52:05 -- common/autotest_common.sh@10 -- $ set +x 00:23:02.899 13:52:05 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:23:02.899 13:52:05 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:23:02.899 13:52:05 -- pm/common@17 -- $ local monitor 00:23:02.899 13:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.899 
13:52:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1224206 00:23:02.899 13:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.899 13:52:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1224208 00:23:02.899 13:52:05 -- pm/common@21 -- $ date +%s 00:23:02.899 13:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.899 13:52:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1224210 00:23:02.899 13:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.899 13:52:05 -- pm/common@21 -- $ date +%s 00:23:02.899 13:52:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1224213 00:23:02.899 13:52:05 -- pm/common@21 -- $ date +%s 00:23:02.899 13:52:05 -- pm/common@26 -- $ sleep 1 00:23:02.899 13:52:05 -- pm/common@21 -- $ date +%s 00:23:02.899 13:52:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441125 00:23:02.899 13:52:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441125 00:23:02.899 13:52:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441125 00:23:02.899 13:52:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441125 00:23:02.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441125_collect-bmc-pm.bmc.pm.log 00:23:02.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441125_collect-vmstat.pm.log 00:23:02.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441125_collect-cpu-load.pm.log 00:23:02.899 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441125_collect-cpu-temp.pm.log 00:23:03.831 13:52:06 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:23:03.831 13:52:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:23:03.831 13:52:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:03.831 13:52:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:03.831 13:52:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:03.831 13:52:06 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:03.831 13:52:06 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:03.831 13:52:06 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:03.831 13:52:06 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:03.831 13:52:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:03.831 13:52:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:03.831 13:52:06 -- pm/common@30 -- $ signal_monitor_resources TERM 00:23:03.831 13:52:06 -- pm/common@41 -- $ local monitor pid pids 
signal=TERM 00:23:03.831 13:52:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:03.831 13:52:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:23:03.831 13:52:06 -- pm/common@45 -- $ pid=1224222 00:23:03.831 13:52:06 -- pm/common@52 -- $ sudo kill -TERM 1224222 00:23:03.831 13:52:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:03.831 13:52:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:23:03.831 13:52:06 -- pm/common@45 -- $ pid=1224223 00:23:03.831 13:52:06 -- pm/common@52 -- $ sudo kill -TERM 1224223 00:23:03.831 13:52:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:03.831 13:52:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:23:03.831 13:52:06 -- pm/common@45 -- $ pid=1224221 00:23:03.831 13:52:06 -- pm/common@52 -- $ sudo kill -TERM 1224221 00:23:03.831 13:52:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:03.831 13:52:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:23:03.831 13:52:06 -- pm/common@45 -- $ pid=1224220 00:23:03.831 13:52:06 -- pm/common@52 -- $ sudo kill -TERM 1224220 00:23:03.831 + [[ -n 924016 ]] 00:23:03.831 + sudo kill 924016 00:23:04.097 [Pipeline] } 00:23:04.113 [Pipeline] // stage 00:23:04.116 [Pipeline] } 00:23:04.129 [Pipeline] // timeout 00:23:04.133 [Pipeline] } 00:23:04.146 [Pipeline] // catchError 00:23:04.150 [Pipeline] } 00:23:04.166 [Pipeline] // wrap 00:23:04.174 [Pipeline] } 00:23:04.190 [Pipeline] // catchError 00:23:04.198 [Pipeline] stage 00:23:04.201 [Pipeline] { (Epilogue) 00:23:04.216 [Pipeline] catchError 00:23:04.218 [Pipeline] { 00:23:04.233 [Pipeline] echo 00:23:04.235 Cleanup processes 00:23:04.240 [Pipeline] sh 00:23:04.518 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:04.518 1224349 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:23:04.518 1224487 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:04.531 [Pipeline] sh 00:23:04.808 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:04.808 ++ grep -v 'sudo pgrep' 00:23:04.808 ++ awk '{print $1}' 00:23:04.808 + sudo kill -9 1224349 00:23:04.819 [Pipeline] sh 00:23:05.099 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:15.081 [Pipeline] sh 00:23:15.360 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:15.360 Artifacts sizes are good 00:23:15.374 [Pipeline] archiveArtifacts 00:23:15.382 Archiving artifacts 00:23:15.525 [Pipeline] sh 00:23:15.808 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:23:15.823 [Pipeline] cleanWs 00:23:15.833 [WS-CLEANUP] Deleting project workspace... 00:23:15.833 [WS-CLEANUP] Deferred wipeout is used... 00:23:15.839 [WS-CLEANUP] done 00:23:15.841 [Pipeline] } 00:23:15.862 [Pipeline] // catchError 00:23:15.874 [Pipeline] sh 00:23:16.152 + logger -p user.info -t JENKINS-CI 00:23:16.162 [Pipeline] } 00:23:16.180 [Pipeline] // stage 00:23:16.185 [Pipeline] } 00:23:16.200 [Pipeline] // node 00:23:16.205 [Pipeline] End of Pipeline 00:23:16.247 Finished: SUCCESS